Similar Documents
20 similar documents found.
1.
We examine three Bayesian case influence measures, namely the φ-divergence, Cook’s posterior mode distance, and Cook’s posterior mean distance, for identifying sets of influential observations in a variety of statistical models, including models for longitudinal data and latent variable models, in the absence or presence of missing data. Since computing these Bayesian case influence measures can be prohibitively expensive in models with missing data, we derive simple first-order approximations to the three measures using the Laplace approximation formula and examine the application of these approximations to the identification of influential sets. All computations for the first-order approximations can be done easily using Markov chain Monte Carlo samples from the posterior distribution based on the full data. Simulated data and an AIDS dataset are analyzed to illustrate the methodology. Supplemental materials for the article are available online.
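The abstract does not give the first-order Laplace formulas themselves; as a rough illustration of how a case influence measure can be obtained from full-data MCMC draws, the sketch below approximates Cook's posterior mean distance by case-deletion importance reweighting. This is a generic device, not the authors' approximation; the function and argument names are hypothetical.

```python
import numpy as np

def cooks_posterior_mean_distance(theta, loglik_i):
    """Illustrative (not the paper's Laplace first-order formula): approximate
    Cook's posterior mean distance for case i from full-data MCMC draws.

    theta    : (S, p) posterior draws based on the full data
    loglik_i : (S,) log f(y_i | theta_s) evaluated at each draw
    """
    # p(theta | y_(-i)) is proportional to p(theta | y) / f(y_i | theta),
    # so case-deletion importance weights are proportional to exp(-loglik_i).
    logw = -np.asarray(loglik_i, float)
    w = np.exp(logw - logw.max())
    w /= w.sum()

    theta = np.asarray(theta, float)
    mean_full = theta.mean(axis=0)               # E[theta | full data]
    mean_del = (w[:, None] * theta).sum(axis=0)  # E[theta | data without case i]
    diff = mean_full - mean_del

    # weight matrix: inverse of the full-data posterior covariance of the draws
    W = np.linalg.inv(np.atleast_2d(np.cov(theta, rowvar=False)))
    return float(diff @ W @ diff)
```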

2.
Mixture models in reliability offer a useful compromise between parametric and nonparametric models when several failure modes are suspected. Classical estimation methods for mixture models rarely handle the additional difficulty that lifetime data are often censored, either deterministically or randomly. In this paper we present several iterative methods based on EM and Stochastic EM methodologies that estimate parametric or semiparametric mixture models for randomly right-censored lifetime data, provided the models are identifiable. We consider different levels of completion for the (incomplete) observed data and provide genuine or EM-like algorithms for several situations. In particular, we show that simulating the missing data arising from the mixture allows a standard R package for survival data analysis to be plugged into the M-step of an EM algorithm. Moreover, in censored semiparametric situations, a stochastic step is the only practical way to compute nonparametric estimates of the unknown survival function. The effectiveness of the proposed algorithms is demonstrated in simulation studies and on an actual dataset from the aeronautic industry.
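As a minimal sketch of the completion idea only (not the paper's semiparametric algorithms), here is one Stochastic EM iteration for a two-component exponential mixture under random right censoring: the latent component labels and the censored residual lifetimes are simulated, after which the M-step is an ordinary complete-data fit. All names and the exponential choice are assumptions.

```python
import numpy as np

def stochastic_em_exp_mixture(t, delta, n_iter=200, seed=0):
    """Stochastic EM for a 2-component exponential mixture with right-censored data.
    t     : observed times (failure time if delta == 1, censoring time if delta == 0)
    delta : censoring indicators
    Returns (pi, lam1, lam2) after n_iter iterations."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, float); delta = np.asarray(delta, int)
    pi, lam = 0.5, np.array([1.0 / t.mean(), 2.0 / t.mean()])  # crude starting values
    for _ in range(n_iter):
        # Stochastic E-step: draw component labels from their posterior probabilities.
        # Uncensored obs contribute the density lam*exp(-lam*t); censored obs the survival exp(-lam*t).
        w1 = pi * np.where(delta == 1, lam[0] * np.exp(-lam[0] * t), np.exp(-lam[0] * t))
        w2 = (1 - pi) * np.where(delta == 1, lam[1] * np.exp(-lam[1] * t), np.exp(-lam[1] * t))
        z = (rng.uniform(size=t.size) < w1 / (w1 + w2)).astype(int)   # 1 -> component 1
        # Complete the censored lifetimes: memorylessness gives t + Exp(lam of the drawn component).
        lam_z = np.where(z == 1, lam[0], lam[1])
        t_full = np.where(delta == 1, t, t + rng.exponential(1.0 / lam_z))
        # M-step: complete-data maximum likelihood estimates.
        if 0 < z.sum() < z.size:              # skip degenerate draws with an empty component
            pi = z.mean()
            lam[0] = z.sum() / t_full[z == 1].sum()
            lam[1] = (z == 0).sum() / t_full[z == 0].sum()
    return pi, lam[0], lam[1]
```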

3.
The paper presents an efficient solution to decision problems in which direct partial information on the distribution of the states of nature is available, either from observations of previous repetitions of the decision problem or from direct expert judgements. To process this information we use a recent generalization of Walley’s imprecise Dirichlet model, which also allows us to handle incomplete observations or imprecise judgements, including missing data. We derive efficient algorithms and discuss properties of the optimal solutions with respect to several criteria, including Gamma-maximinity and E-admissibility. In the case of precise data and pure actions, the former surprisingly leads to a frequency-based variant of the Hodges–Lehmann criterion, which was developed in classical decision theory as a compromise between Bayesian and minimax procedures.
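For the special case of precise observations and pure actions, the Gamma-maximin criterion under the imprecise Dirichlet model reduces to maximizing a lower expected utility that mixes the observed state frequencies with the worst-case state. The sketch below illustrates that reduced case only; it does not cover the paper's algorithms for incomplete observations, and the example numbers are made up.

```python
import numpy as np

def gamma_maximin_idm(utility, counts, s=2.0):
    """Gamma-maximin choice among pure actions under the imprecise Dirichlet model (IDM).
    utility : (n_actions, n_states) utility matrix u(a, theta)
    counts  : observed frequencies of the states (precise data)
    s       : IDM hyperparameter (prior strength)
    Lower expected utility of action a:
        ( sum_theta counts[theta] * u(a, theta) + s * min_theta u(a, theta) ) / (n + s)
    """
    utility = np.asarray(utility, float)
    counts = np.asarray(counts, float)
    n = counts.sum()
    lower_eu = (utility @ counts + s * utility.min(axis=1)) / (n + s)
    return int(np.argmax(lower_eu)), lower_eu

# Example: two actions, three states observed with frequencies (10, 5, 5).
best_action, values = gamma_maximin_idm([[1.0, 0.2, 0.0], [0.6, 0.6, 0.4]], [10, 5, 5])
```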

4.
Detection of multiple outliers or of a subset of influential points has rarely been considered in linear measurement error models. In this paper a new influence statistic for a single observation or a set of observations is generalized and characterized based on the corrected likelihood in the linear measurement error model. The statistic can be expressed in terms of the residuals and leverages of the linear measurement error regression. Unlike Cook’s statistic, this new measure of influence is asymptotically normally distributed and is able to detect subsets of high-leverage outliers that are not identified by Cook’s statistic. Simulation studies and a real data set are analysed as illustrative examples.

5.
Synthesized estimation of the failure rate with zero-failure data
For zero-failure data from the exponential distribution, a synthesized estimation method for the failure rate is proposed. When the prior distribution of the failure rate is a truncated Gamma distribution, the hierarchical Bayes estimate of the failure rate is given. After failure information is introduced, again with a truncated Gamma prior, the hierarchical Bayes estimate and the synthesized estimate of the failure rate are derived, together with a synthesized estimate of the reliability, and the method is applied to a practical problem.
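Hedged sketch: the following is only the basic single-level building block, not the paper's hierarchical or synthesized estimator. It computes the posterior mean of the failure rate λ for exponential lifetimes with zero failures over a total exposure time T under a truncated Gamma prior, by numerical integration; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def posterior_mean_failure_rate(T, a, b, lam_upper):
    """Posterior mean of the failure rate lambda for exponential lifetimes with zero
    failures over total test time T, under a Gamma(a, b) prior truncated to (0, lam_upper).
    The zero-failure likelihood is exp(-lambda * T)."""
    kernel = lambda lam: lam ** (a - 1) * np.exp(-(b + T) * lam)    # unnormalized posterior
    num, _ = quad(lambda lam: lam * kernel(lam), 0.0, lam_upper)
    den, _ = quad(kernel, 0.0, lam_upper)
    return num / den

# e.g. total test time 5000 h, prior Gamma(1, 1000) truncated at lambda = 0.01 per hour
lam_hat = posterior_mean_failure_rate(T=5000.0, a=1.0, b=1000.0, lam_upper=0.01)
```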

6.
Large-sample properties of the product-limit estimator in survival analysis
何书元 《数学进展》1998,27(6):481-500
In survival analysis, one problem of interest is estimating the lifetime distribution from incomplete lifetime data. In practice, the most common types of incomplete data are right-censored data, left-truncated data, and left-truncated right-censored data. When the lifetime distribution is estimated from these three types of data, the product-limit estimator is the usual statistic, and its large-sample properties have therefore received sustained attention. This paper gives a fairly systematic review of recent research in this area.
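For reference, the product-limit (Kaplan–Meier) estimator discussed in this survey has a simple form; a minimal sketch for right-censored data follows (left truncation would additionally require adjusting the risk sets, which this sketch does not do).

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of the survival function from right-censored data.
    times  : observed times (failure or censoring)
    events : 1 = observed failure, 0 = right-censored
    Returns the distinct failure times and the survival estimates at those times."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    fail_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in fail_times:
        d = np.sum((times == t) & (events == 1))   # failures at time t
        n = np.sum(times >= t)                     # number at risk just before t
        s *= 1.0 - d / n
        surv.append(s)
    return fail_times, np.array(surv)
```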

7.
For Weibull type-I (time-terminated) censored test data, this paper proposes a method for computing confidence limits for reliability. By filling in the censored observations, the incomplete data are converted into pseudo-complete data, and the confidence limit for reliability under censoring is then obtained with the complete-data procedure. Simulation studies show that the proposed algorithm is numerically stable and easy to apply.
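The data-filling step can be illustrated as follows: a right-censored Weibull observation is replaced by a draw from the conditional Weibull distribution given survival beyond the censoring time, via the inverse CDF. This sketches only the completion idea under assumed shape and scale values; it is not the paper's full confidence-limit procedure.

```python
import numpy as np

def impute_censored_weibull(t, delta, shape, scale, rng=None):
    """Replace right-censored observations by draws from the conditional Weibull
    distribution given T > censoring time c, using
        T = scale * ((c/scale)**shape - log(1 - U)) ** (1/shape),  U ~ Uniform(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t, float); delta = np.asarray(delta, int)
    u = rng.uniform(size=t.size)
    t_imputed = scale * ((t / scale) ** shape - np.log(1.0 - u)) ** (1.0 / shape)
    return np.where(delta == 1, t, t_imputed)   # keep observed failures as they are
```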

8.
This paper discusses a principal–agent problem with multi-dimensional incomplete information between a principal and an agent. First, how to describe the incomplete information in such an agency problem is a challenging issue. This paper characterizes the incomplete information by an uncertain variable, which is an appropriate tool for depicting subjective assessments and modeling human uncertainty. Second, the relevant literature has often used expected-utility maximization to represent the two participants’ goals. However, the Ellsberg paradox indicates that the expected-utility criterion is not always an appropriate decision rule. For this reason, this paper presents another decision rule based on confidence levels. Instead of maximizing expected utility, the principal aims to maximize his potential income under an acceptable confidence level, and the agent’s aim depends on whether he has private information about his effort. According to the agent’s different decision rules, three classes of uncertain agency (UA) models and their respective optimal contracts are presented. Finally, a portfolio selection problem is studied to demonstrate the modeling idea and the viability of the proposed UA models.

9.
An improved universal generating function (UGF) algorithm oriented to sensed-data fusion is proposed and used to evaluate the reliability of wireless sensor networks (WSNs) with a linear topology. First, the linear topology and data-transmission process of a WSN under the PEGASIS protocol are abstracted into a bidirectional consecutive k-out-of-n:F system model. Then, according to the way sensed data are transmitted and fused in the WSN, the UGF expression of a sensor node and the composition operator are redefined in the improved algorithm. Finally, the bidirectional consecutive k/n:F model is decomposed into unidirectional models, and the reliability expression of the bidirectional model is derived from the reliability of the resulting unidirectional models. The improved algorithm is validated on a concrete example, and the results show that it can effectively solve the reliability-evaluation problem for linear-topology sensor networks.
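The paper's contribution is the UGF composition operators and the bidirectional decomposition; the sketch below shows only the underlying building block, the reliability of a plain (unidirectional) linear consecutive k-out-of-n:F system, computed by dynamic programming over the length of the trailing run of failed components. Node reliabilities and k are illustrative.

```python
import numpy as np

def consecutive_k_out_of_n_F(p, k):
    """Reliability of a linear consecutive k-out-of-n:F system: the system fails iff
    at least k consecutive components fail.
    p : per-component reliabilities (length n), independent components."""
    p = np.asarray(p, float)
    state = np.zeros(k)        # state[m] = P(no k-run so far and the last m components failed)
    state[0] = 1.0
    for pi in p:
        new = np.zeros(k)
        new[0] = pi * state.sum()          # component works: run length resets to 0
        new[1:] = (1.0 - pi) * state[:-1]  # component fails: run length grows by 1
        state = new
    return state.sum()

# e.g. 10 identical nodes with reliability 0.95; the chain fails if 2 adjacent nodes fail
r = consecutive_k_out_of_n_F([0.95] * 10, k=2)
```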

10.
Sheng-Tun Li  Su-Yu Lin  Yi-Chung Cheng 《PAMM》2007,7(1):2010019-2010020
The study of fuzzy time series has attracted increasing attention owing to its ability to handle vague and incomplete data. A variety of forecasting models have been devoted to improving forecasting accuracy; however, the issue of partitioning the intervals has rarely been investigated. Recently, we proposed a deterministic forecasting model that eliminates the major overhead of determining the order k in high-order models. This paper presents a continuation of that work, focusing on the interval-partitioning issue by applying the fuzzy c-means technique, which takes the distribution of the data points into account and produces unequal-sized intervals. In addition, the forecasting model is extended to handle two-factor problems. The superior accuracy of the proposed model is demonstrated through two empirical experiments and comparisons with existing models. The reliability of the forecasting model is further justified using a Monte Carlo simulation and box plots.

11.
The gamma distribution is one of the most commonly used statistical distributions in reliability. While maximum likelihood has traditionally been the main method for estimating gamma parameters, Hirose proposed a continuation method for parameter estimation in the three-parameter gamma distribution. In this paper, we apply Markov chain Monte Carlo techniques to carry out a Bayesian estimation procedure using Hirose’s simulated data as well as two real data sets. The method is flexible, and inference for any quantity of interest is readily available.
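A minimal random-walk Metropolis sketch for the three-parameter gamma (shape, scale, location/threshold) under flat priors subject to the support constraint is shown below. It is a generic illustration rather than the authors' specific MCMC scheme; starting values and step sizes are ad hoc.

```python
import numpy as np
from scipy import stats

def gamma3_metropolis(x, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis for the three-parameter gamma (shape a, scale b, location c),
    with flat priors subject to a > 0, b > 0, c < min(x).  Illustrative only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    a, b, c = 1.0, x.std(), x.min() - 0.1 * x.std()   # crude starting values

    def logpost(a, b, c):
        if a <= 0 or b <= 0 or c >= x.min():
            return -np.inf
        return stats.gamma.logpdf(x, a, loc=c, scale=b).sum()

    lp, draws = logpost(a, b, c), []
    for _ in range(n_iter):
        prop = np.array([a, b, c]) + step * rng.normal(size=3) * np.array([a, b, x.std()])
        lp_new = logpost(*prop)
        if np.log(rng.uniform()) < lp_new - lp:       # Metropolis accept/reject
            (a, b, c), lp = prop, lp_new
        draws.append((a, b, c))
    return np.array(draws)
```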

12.
The volatility of financial asset return series typically exhibits heavy tails, peakedness, and asymmetry, and describing these features requires a suitable probability distribution. Finding a better distribution is therefore of great importance for risk measurement and for computing VaR. Motivated by this, the Skewed-t distribution is introduced for measuring VaR, and the accuracy of the VaR estimates produced by RiskMetrics and by FIGARCH-type models is compared; VaR is analysed for both long and short positions. The results show that, for both positions, the Skewed-t distribution fits the heavy tails and asymmetry of the assets better than the normal distribution, and that, at the various confidence levels considered, the FIAGARCH (CHUNG) model improves the accuracy of the VaR forecasts relative to traditional models, over- or under-estimating risk to a lesser degree.
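A much-simplified illustration of quantile-based VaR for long and short positions follows, using a symmetric standardized Student-t in place of the Skewed-t and a fixed volatility in place of the FIGARCH-type forecasts that the paper actually compares; all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import t

def var_student_t(mu, sigma, nu, alpha=0.05):
    """One-period Value-at-Risk for long and short positions when standardized returns
    follow a unit-variance Student-t with nu > 2 degrees of freedom.
    A long position loses in the left tail, a short position in the right tail."""
    q = t.ppf(alpha, nu) * np.sqrt((nu - 2.0) / nu)   # unit-variance t quantile (q < 0)
    var_long = -(mu + sigma * q)                      # loss at the alpha left-tail quantile
    var_short = mu - sigma * q                        # right tail, by symmetry of the t
    return var_long, var_short

# e.g. daily mean return 0, volatility 1.5%, nu = 6, 95% VaR
vl, vs = var_student_t(mu=0.0, sigma=0.015, nu=6, alpha=0.05)
```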

13.
Kolmogorov discovered in 1933 that the empirical statistics of several independent values of any random variable differ from the true distribution function of this variable in a universal way: the random distribution of the distance of one of these statistics from the other verifies (asymptotically) a stochastic distribution law (later called “Kolmogorov’s distribution”). The present paper compares Kolmogorov’s distribution with a similar object provided by the chain of observations of a nonrandom, deterministic dynamical system formed by the consecutive members of a geometric progression. Namely, Kolmogorov’s distribution is observed for the distribution of the last pairs of digits of the powers of the integer 3, that is, for the sequence 01,03,09,27,81,43,29,87,… (which is not random at all and does not satisfy the conditions of Kolmogorov’s theorem).
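The observation can be reproduced directly: treat the last two decimal digits of the powers of 3 (rescaled to [0,1)) as an "empirical sample" and compute the Kolmogorov–Smirnov distance to the uniform distribution. A small sketch, with an arbitrary sample length:

```python
import numpy as np
from scipy.stats import kstest

# Last two decimal digits of 3**0, 3**1, 3**2, ... (the sequence 01, 03, 09, 27, 81, ...),
# rescaled to [0, 1).  The sequence is periodic with period 20 modulo 100 and is not random.
n = 200
x = np.array([pow(3, k, 100) for k in range(n)]) / 100.0
stat, pvalue = kstest(x, "uniform")   # KS distance to the uniform distribution on [0, 1)
```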

14.
An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation interface covering a broad range of non-linear generalized structural equation models is described. The model and software are demonstrated on data from measurements of the serotonin transporter in the human brain.

15.
The parallel use of the ensemble Kalman filter technique for assimilating observational data in the HYCOM model of the World Ocean is described. Data from satellite observations of sea surface temperature and sea surface height are assimilated both separately and jointly. Numerical experiments on correcting model calculations using observational data are performed, and the corrected results are compared with model calculations without assimilation. The effectiveness of the employed parallelization algorithm is confirmed.
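The core of the ensemble Kalman filter analysis step (in its stochastic, perturbed-observations form) fits in a few lines; the generic sketch below is not tied to HYCOM or to the parallel implementation described in the paper, and all variable names are illustrative.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=None):
    """Perturbed-observation ensemble Kalman filter analysis step.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    Returns the analysis ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    n_state, n_ens = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - H @ X)                       # update every ensemble member
```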

16.
One of the most important issues for a development manager may be how to predict the reliability of a software system at an arbitrary testing time. In this paper, using software failure-occurrence time data, we discuss a method of software reliability prediction based on software reliability growth models described by a nonhomogeneous Poisson process (NHPP). From the applied software reliability growth models, the conditional probability distribution of the time between software failures is derived, and its mean and median are obtained as reliability prediction measures. Finally, based on several numerical examples, we compare the performance of these measures from the viewpoint of software reliability prediction in the testing phase.
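For a concrete NHPP model such as the exponential (Goel–Okumoto) model with mean value function m(t) = a(1 − exp(−bt)), the conditional probability of no failure in (t, t+x] is exp(−(m(t+x) − m(t))), and the median of the time to the next failure has a closed form. The sketch below illustrates this one model only; the parameter values are assumed, not estimated from the paper's examples.

```python
import numpy as np

def go_conditional_reliability(x, t, a, b):
    """P(no failure in (t, t+x]) for the Goel-Okumoto NHPP model m(t) = a*(1 - exp(-b*t))."""
    return np.exp(-(a * np.exp(-b * t)) * (1.0 - np.exp(-b * x)))

def go_median_time_to_next_failure(t, a, b):
    """Median time to the next failure after testing time t, used as a prediction measure.
    Returns inf when the residual failure content a*exp(-b*t) is below ln 2, i.e. when the
    chance of ever observing another failure is below one half."""
    residual = a * np.exp(-b * t)
    if residual <= np.log(2.0):
        return np.inf
    return -np.log(1.0 - np.log(2.0) / residual) / b

# e.g. estimated a = 120 total expected faults, b = 0.05 per day, after t = 30 days of testing
median_x = go_median_time_to_next_failure(t=30.0, a=120.0, b=0.05)
```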

17.
If a test consists of two parts, the Spearman–Brown formula and Flanagan’s coefficient (Cronbach’s alpha) are the standard tools for estimating its reliability. However, these coefficients may be inappropriate if their associated measurement models fail to hold. We study the robustness of reliability estimation in the two-part case to coefficient misspecification. We compare five reliability coefficients and study various conditions on the standard deviations and lengths of the parts. Various conditional upper bounds on the differences between the coefficients are derived. It is shown that the difference between the Spearman–Brown formula and Horst’s formula is negligible in many cases. We conclude that all five reliability coefficients can be used if there are only small or moderate differences between the standard deviations and the lengths of the parts.
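The two coefficients named in the abstract have simple closed forms; the sketch below computes both from the two part scores (it does not cover Horst's formula or the other coefficients compared in the paper).

```python
import numpy as np

def two_part_reliability(part1, part2):
    """Two standard two-part reliability coefficients.
    Spearman-Brown:            2*r / (1 + r), r = correlation between the parts
                               (appropriate for parallel parts).
    Flanagan / Cronbach alpha: 2 * (1 - (s1^2 + s2^2) / s_total^2)
                               (appropriate for essentially tau-equivalent parts)."""
    part1, part2 = np.asarray(part1, float), np.asarray(part2, float)
    r = np.corrcoef(part1, part2)[0, 1]
    spearman_brown = 2.0 * r / (1.0 + r)
    s1, s2 = part1.var(ddof=1), part2.var(ddof=1)
    st = (part1 + part2).var(ddof=1)
    flanagan_alpha = 2.0 * (1.0 - (s1 + s2) / st)
    return spearman_brown, flanagan_alpha
```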

18.
Decisions during the reliability growth development process of engineering equipment involve trade-offs between cost and risk. However slight, there exists a chance an item of equipment will not function as planned during its specified life. Consequently the producer can incur a financial penalty. To date, reliability growth research has focussed on the development of models to estimate the rate of failure from test data. Such models are used to support decisions about the effectiveness of options to improve reliability. The extension of reliability growth models to incorporate financial costs associated with ‘unreliability’ is much neglected. In this paper, we extend a Bayesian reliability growth model to include cost analysis. The rationale of the stochastic process underpinning the growth model and the cost structures are described. The ways in which this model can be used to support cost–benefit analysis during product development are discussed and illustrated through a simple case.

19.
A characterization of the bivariate Friday–Patil exponential distribution is derived. Using this characterization, the maximum likelihood estimates and moment estimates of the parameters of the bivariate Friday–Patil exponential distribution are obtained, and an estimate of system reliability is given when the strength follows a bivariate Friday–Patil exponential distribution.

20.
Yanfei Wang  Claudia Kuenzer 《PAMM》2007,7(1):1042103-1042104
Determining the aerosol particle size distribution function from the particle-spectrum extinction equation is an ill-posed integral equation of the first kind, since, as is well known, remote-sensing observations are often limited or insufficient and are contaminated by noise. Physically, the particle size distribution is always nonnegative, and we are often faced with incomplete data. The concept of maximum entropy from information theory and statistical mechanics can therefore be used to counteract this problem of missing or erroneous data. In this paper, we study a maximum-entropy-based regularization model and gradient methods for solving the corresponding optimization problem. Numerical tests on synthetic aerosol data show the efficiency and feasibility of the proposed algorithms.
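A minimal sketch of an entropy-regularized inversion solved by projected gradient descent follows. It is a generic illustration, not the authors' specific regularization model or gradient method; K stands for an assumed discretized extinction kernel and b for the measured extinction data, and the step size and penalty weight are arbitrary.

```python
import numpy as np

def max_entropy_inversion(K, b, lam=1e-2, n_iter=5000, step=1e-3, floor=1e-12):
    """Projected-gradient sketch for an entropy-regularized least-squares inversion
        min_{x > 0}  ||K x - b||^2 + lam * sum_j x_j * (log x_j - 1),
    a common way to enforce nonnegativity and counteract missing or noisy data.
    The gradient of the entropy penalty is lam * log(x); positivity is kept by clipping."""
    n = K.shape[1]
    x = np.full(n, 1.0 / n)                         # flat (maximum-entropy) starting point
    for _ in range(n_iter):
        grad = 2.0 * K.T @ (K @ x - b) + lam * np.log(x)
        x = np.clip(x - step * grad, floor, None)   # keep the size distribution positive
    return x
```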
