Similar Articles
20 similar articles found.
1.
This paper investigates parametric estimation for randomly right censored data. In parametric estimation, the Kullback-Leibler information is used as a measure of the divergence of the true distribution generating the data relative to a distribution in an assumed parametric model M. When the data are uncensored, the maximum likelihood estimator (MLE) is a consistent estimator of the minimizer of the Kullback-Leibler information, even if the assumed model M does not contain the true distribution. We call this property minimum Kullback-Leibler information consistency (MKLI-consistency). However, the MLE obtained by maximizing the likelihood function based on censored data is not MKLI-consistent. As an alternative to the MLE, Oakes (1986, Biometrics, 42, 177–182) proposed an estimator termed the approximate maximum likelihood estimator (AMLE) for its computational advantage and potential for robustness. We show MKLI-consistency and asymptotic normality of the AMLE under misspecification of the parametric model. In a simulation study, we investigate the mean square errors of these two estimators and of an estimator obtained by treating a jackknife-corrected Kaplan-Meier integral as the log-likelihood. On the basis of the simulation results and the asymptotic results, we compare these estimators. We also derive information criteria for the MLE and the AMLE under censorship, which can be used not only for selecting models but also for selecting estimation procedures.

2.
Suppose we have a renewal process observed over a fixed length of time starting from a random time point and only the times of renewals that occur within the observation window are recorded. Assuming a parametric model for the renewal time distribution with parameter θ, we obtain the likelihood of the observed data and describe the exact and asymptotic behavior of the Fisher information (FI) on θ contained in this window censored renewal process. We illustrate our results with exponential, gamma, and Weibull models for the renewal distribution. We use the FI matrix to determine optimal window length for designing experiments with recurring events when the total time of observation is fixed. Our results are useful in estimating the standard errors of the maximum likelihood estimators and in determining the sample size and duration of clinical trials that involve recurring events associated with diseases such as lupus.

3.
Chen and Bhattacharyya (1988, Comm. Statist. Theory Methods, 17, 1857–1870) derived the exact distribution of the maximum likelihood estimator of the mean of an exponential distribution and an exact lower confidence bound for the mean based on a hybrid censored sample. In this paper, an alternative simple form for the distribution is obtained and is shown to be equivalent to that of Chen and Bhattacharyya (1988). Noting that this scheme, which guarantees that the experiment terminates by a fixed time T, may result in few failures, we propose a new hybrid censoring scheme which guarantees at least a fixed number of failures in a life testing experiment. The exact distribution of the MLE as well as an exact lower confidence bound for the mean is also obtained for this case. Finally, three examples are presented to illustrate all the results developed here.
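Under the original hybrid scheme, the MLE of the exponential mean has a simple closed form: total time on test divided by the number of observed failures. A minimal sketch (the simulation setup and the stopping-rule parameters r and T below are illustrative assumptions, not values from the paper):

```python
import random

random.seed(1)

def hybrid_censored_mle(lifetimes, r, T):
    """MLE of the exponential mean under Type-I hybrid censoring:
    the test stops at min(X_(r), T).  Returns None if no failures
    occur before the stopping time (the MLE does not exist then)."""
    n = len(lifetimes)
    x = sorted(lifetimes)
    stop = min(x[r - 1], T)                 # hybrid stopping time
    failures = [t for t in x if t <= stop]
    d = len(failures)
    if d == 0:
        return None
    # total time on test: observed failures plus (n - d) units censored at `stop`
    ttt = sum(failures) + (n - d) * stop
    return ttt / d

# simulate n = 50 exponential lifetimes with true mean 2.0
n, theta = 50, 2.0
sample = [random.expovariate(1 / theta) for _ in range(n)]
est = hybrid_censored_mle(sample, r=30, T=3.0)
```

The new scheme proposed in the paper changes only the stopping rule (guaranteeing a minimum number of failures); the estimator itself keeps this total-time-on-test form.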

4.
We propose a procedure to construct the empirical likelihood ratio confidence interval for the mean using a resampling method. This approach leads to the definition of a likelihood function for censored data, called the weighted empirical likelihood function. With a second order expansion of the log likelihood ratio, a weighted empirical likelihood ratio confidence interval for the mean is proposed and shown by simulation studies to have coverage accuracy comparable to alternative methods, including the nonparametric bootstrap-t. The procedures proposed here apply in a unified way to different types of censored data, such as right censored data, doubly censored data, and interval censored data, and are computationally more efficient than the bootstrap-t method. An example with a set of doubly censored breast cancer data illustrates the application of our methods.
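Weighted empirical likelihood for right censored data is built on the weights implied by the product-limit (Kaplan-Meier) estimator. As background, a minimal sketch of the product-limit estimator itself (the toy data are hypothetical; this is not the paper's resampling procedure):

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit estimator for right-censored data.
    events[i] is 1 if times[i] is an observed failure, 0 if censored.
    Returns the survival curve as (t, S(t)) at the distinct failure times."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    for t, grp in groupby(data, key=lambda p: p[0]):
        grp = list(grp)
        d = sum(e for _, e in grp)          # failures at time t
        if d > 0:
            surv *= 1.0 - d / at_risk       # product-limit update
            curve.append((t, surv))
        at_risk -= len(grp)                 # failures and censorings leave the risk set
    return curve

# five subjects; the second observation at t=2 is censored
km_curve = kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 1])
```

The jumps of this curve give the weights that a weighted empirical likelihood attaches to the observed failure times.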

5.
We consider the use of B-spline nonparametric regression models estimated by the maximum penalized likelihood method for extracting information from data with complex nonlinear structure. Crucial points in B-spline smoothing are the choices of a smoothing parameter and the number of basis functions, for which several selectors have been proposed based on cross-validation and the Akaike information criterion (AIC). It should be noticed, however, that AIC is a criterion for evaluating models estimated by the maximum likelihood method, and that it was derived under the assumption that the true distribution belongs to the specified parametric model. In this paper we derive information criteria for evaluating B-spline nonparametric regression models estimated by the maximum penalized likelihood method in the context of generalized linear models under model misspecification. We use Monte Carlo experiments and real data examples to examine the properties of our criteria in comparison with various previously proposed selectors.

6.
A representation theorem and a local asymptotic minimax theorem are derived for nonparametric estimators of the distribution function on the basis of randomly truncated data. The convolution-type representation theorem asserts that the limiting process of any regular estimator of the distribution function is at least as dispersed as the limiting process of the product-limit estimator. The theorems are similar to the results for the complete data case due to Beran (1977, Ann. Statist., 5, 400–404) and for the censored data case due to Wellner (1982, Ann. Statist., 10, 595–602). Both likelihood and functional approaches are considered, and the proofs rely on the method of Begun et al. (1983, Ann. Statist., 11, 432–452) with slight modifications.

7.
In the paper, the maximum likelihood estimator for an unknown density is investigated. We deal with censored observations and construct a confidence region for this density. Bibliography: 4 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 341, 2007, pp. 220–228.

8.
In applied statistics, the coefficient of variation (CV) is widely used. However, inference concerning the coefficient of variation of non-normal distributions is rarely reported. In this article, a simulation-based Bayesian approach is adopted to estimate the CV under progressive first-failure censored data from the Gompertz distribution. Sampling schemes such as first-failure censoring, progressive Type-II censoring, Type-II censoring, and complete sampling can be obtained as special cases of the progressive first-failure censoring scheme. The simulation-based approach gives a point estimate as well as the empirical sampling distribution of the CV. A joint prior density, formed as the product of a conditional gamma density and an inverted gamma density for the unknown Gompertz parameters, is considered. In addition, results from maximum likelihood and parametric bootstrap techniques are also presented. An analysis of a real life data set is presented for illustrative purposes, and results from simulation studies assessing the performance of the proposed method are included.

9.
This article considers the estimation of the parameters of the Weibull distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood method under a step-stress partially accelerated test model. The maximum likelihood estimates (MLEs) of the unknown parameters are obtained by the Newton–Raphson algorithm. The approximate Fisher information matrix is also obtained for constructing asymptotic confidence bounds for the model parameters. The biases and mean square errors of the maximum likelihood estimators are computed through a Monte Carlo simulation study to assess their performance.
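As a rough illustration of the Newton–Raphson step for a censored Weibull likelihood (without the step-stress acceleration structure, which is specific to the paper), the scale can be profiled out in closed form and the iteration run on the shape alone; the simulated data, censoring time, and starting value below are all assumptions:

```python
import math
import random

random.seed(2)

def weibull_mle_censored(times, events, k0=1.0, tol=1e-8):
    """Newton-Raphson for the Weibull shape k under right censoring,
    using a numerical derivative of the profile score; the scale is
    then recovered in closed form.  events[i] is 1 for an observed
    failure, 0 for a censored time."""
    d = sum(events)
    logs = [math.log(t) for t, e in zip(times, events) if e == 1]

    def score(k):
        s0 = sum(t ** k for t in times)
        s1 = sum(t ** k * math.log(t) for t in times)
        return sum(logs) / d + 1.0 / k - s1 / s0

    k = k0
    for _ in range(100):
        h = 1e-5
        dg = (score(k + h) - score(k - h)) / (2 * h)
        k_new = k - score(k) / dg
        if k_new <= 0:              # safeguard against overshooting
            k_new = k / 2.0
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    scale = (sum(t ** k for t in times) / d) ** (1.0 / k)
    return k, scale

# simulate Weibull(shape=2, scale=1) lifetimes, Type-I censored at 1.5
raw = [random.weibullvariate(1.0, 2.0) for _ in range(200)]
times = [min(t, 1.5) for t in raw]
events = [1 if t < 1.5 else 0 for t in raw]
k_hat, lam_hat = weibull_mle_censored(times, events)
```

In practice the derivative of the score would be written out analytically, but the numerical version keeps the sketch short.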

10.

We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow “moments” that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples—specification of a multivariate prior distribution in a constrained-parameter family and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation; it combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen; it uses computational ideas and importance sampling identities of Gelfand and Carlin, Geyer, and Geyer and Thompson developed for Monte Carlo maximum likelihood; and it has some similarities to the maximum likelihood methods of Wei and Tanner.
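A toy version of the approach, matching a single moment of an exponential model by simulation with common random numbers and Newton-Raphson steps (the exponential example and all numerical settings are illustrative assumptions, not the paper's pharmacokinetic or image applications):

```python
import math
import random

random.seed(3)

def mc_method_of_moments(target_mean, n_sims=20000, theta0=1.0):
    """Monte Carlo method of moments for an exponential rate theta:
    solve E_theta[X] = target_mean, estimating the expectation by
    simulation with common random numbers and Newton-Raphson steps."""
    u = [random.random() for _ in range(n_sims)]   # common random numbers

    def sim_mean(theta):
        # inverse-CDF draws X = -ln(1-U)/theta reuse the same U's,
        # so sim_mean is a smooth, deterministic function of theta
        return sum(-math.log(1 - ui) for ui in u) / (n_sims * theta)

    theta = theta0
    for _ in range(50):
        g = sim_mean(theta) - target_mean          # moment discrepancy
        h = 1e-5
        dg = (sim_mean(theta + h) - sim_mean(theta - h)) / (2 * h)
        step = g / dg
        theta -= step
        if abs(step) < 1e-10:
            break
    return theta

# an exponential with mean 0.5 has rate 2, so theta_hat should be near 2
theta_hat = mc_method_of_moments(target_mean=0.5)
```

Fixing the underlying uniforms across Newton iterations is what makes the simulated moment differentiable in the parameter, mirroring the common-random-numbers idea behind such schemes.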

11.
We discuss maximum likelihood estimation, interval estimation, and Bayes estimation of the environmental factor of the two-parameter exponential distribution under Type-II censored samples. Taking the quotient density of the parameters' posterior densities as the posterior density of the environmental factor, and incorporating expert experience, we use the Bayesian method to derive Bayes estimates of the environmental factor under squared-error loss and under LINEX loss. Finally, the mean squared errors (MSE) of the various estimates are compared by Monte Carlo simulation. The results show that the estimate of the environmental factor under LINEX loss performs better.
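For reference, the Bayes decision under LINEX loss has the well-known closed form θ̂ = -(1/a) ln E[e^{-aθ} | data], which can be approximated directly from posterior draws; the gamma posterior below is a stand-in for illustration, not the quotient posterior derived in the paper:

```python
import math
import random

random.seed(4)

def linex_bayes_estimate(posterior_draws, a=1.0):
    """Bayes estimate under LINEX loss L(d, t) = exp(a(d-t)) - a(d-t) - 1:
    the optimal decision is -(1/a) * ln E[exp(-a*theta) | data],
    approximated here by a posterior-draw average."""
    m = sum(math.exp(-a * th) for th in posterior_draws) / len(posterior_draws)
    return -math.log(m) / a

# hypothetical posterior draws for an environmental factor: Gamma(4, scale 0.5)
draws = [random.gammavariate(4.0, 0.5) for _ in range(50000)]
sq_loss_est = sum(draws) / len(draws)            # posterior mean (squared-error loss)
linex_est = linex_bayes_estimate(draws, a=1.0)   # shrinks below the mean when a > 0
```

With a > 0, LINEX loss penalizes overestimation more heavily, so the LINEX estimate sits below the posterior mean, consistent with the asymmetric-loss comparison the abstract describes.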

12.
Portmanteau test statistics are useful for checking the adequacy of many time series models. Here we generalize the omnibus procedure proposed by Duchesne and Roy (2004, Journal of Multivariate Analysis, 89, 148–180) for multivariate stationary autoregressive models with exogenous variables (VARX) to the case of cointegrated (or partially nonstationary) VARX models. We show that for cointegrated VARX time series, the test statistic, obtained by comparing the spectral density of the errors under the null hypothesis of non-correlation with a kernel-based spectral density estimator, is asymptotically standard normal. The parameters of the model can be estimated by conditional maximum likelihood or by asymptotically equivalent estimation procedures. The procedure relies on a truncation point or smoothing parameter, and we state conditions under which the asymptotic distribution of the test statistic is unaffected when this parameter is chosen by a data-dependent method. The finite sample properties of the test statistics are studied via a small simulation study.

13.
This paper introduces Bayesian optimal design methods for step-stress accelerated life test planning with one accelerating variable, when the acceleration model is linear in the accelerating variable or a function of it, based on censored data from a log-location-scale distribution. To find the optimal plan, we propose different Monte Carlo simulation algorithms for different Bayesian optimality criteria. We present an example using the lognormal life distribution with Type-I censoring to illustrate the different Bayesian methods and to examine the effects of the prior distribution and sample size. Comparing the different Bayesian methods, we suggest adopting the B1(τ) method when the sample size is large and the B2(τ) method when it is small. Finally, the Bayesian optimal plans are compared with the plan obtained by the maximum likelihood method.

14.
The reliability of the Weibull distribution with homogeneous, heavily censored data is analyzed in this study. The universal model of heavily censored data and existing methods, including maximum likelihood, least-squares, E-Bayesian estimation, and hierarchical Bayesian methods, are introduced. An improved method is proposed based on Bayesian inference and the least-squares method. This method focuses on the Bayes estimates of the failure probabilities for all the samples. The conjugate prior distribution of the failure probability is set, and an optimization model is developed by maximizing the information entropy of the prior distribution to determine the hyper-parameters. By integrating the likelihood function, the posterior distribution of the failure probability is then derived to yield the Bayes estimate of the failure probability. The estimates of the reliability parameters are obtained by fitting the distribution curve using the least-squares method. The four existing methods are compared with the proposed method in terms of applicability, precision, efficiency, robustness, and simplicity; in particular, closed form expressions for the E-Bayesian and hierarchical Bayesian estimates are derived and used. The comparisons demonstrate that the improved method is superior. Finally, three illustrative examples are presented to show the application of the proposed method.
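The least-squares ingredient of such methods is typically a regression on the linearized Weibull CDF, ln(-ln(1-F)) = k ln t - k ln λ. A complete-sample sketch using Bernard's median-rank plotting positions (the simulated data and parameter values are assumptions; the paper's method instead feeds Bayes estimates of the failure probabilities into this kind of fit):

```python
import math
import random

random.seed(5)

def weibull_lsq(times):
    """Median-rank regression fit of a Weibull(shape k, scale lam):
    regress y_i = ln(-ln(1 - F_i)) on x_i = ln(t_(i)), with Bernard's
    approximation F_i = (i - 0.3)/(n + 0.4) as the plotting position."""
    x = sorted(times)
    n = len(x)
    xs = [math.log(t) for t in x]
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    shape = slope                         # slope equals the shape k
    scale = math.exp(mx - my / slope)     # intercept = -k ln(lam)
    return shape, scale

# simulate Weibull with scale 2.0 and shape 1.5 (random.weibullvariate(scale, shape))
data = [random.weibullvariate(2.0, 1.5) for _ in range(300)]
shape_hat, scale_hat = weibull_lsq(data)
```

Replacing the median ranks with Bayes estimates of the failure probabilities, as the abstract describes, changes only the ys used in the regression.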

15.
In this paper, we consider the model selection problem for discretely observed ergodic multi-dimensional diffusion processes. Akaike's information criterion (AIC) is a useful tool for evaluating statistical models. Since AIC is constructed from the maximum log likelihood and the dimension of the parameter space, it may look easy to obtain AIC even for discretely observed diffusion processes. However, there is a serious problem: the transition density of a diffusion process does not generally have an explicit form. Instead of the exact log-likelihood, we use a contrast function based on a locally Gaussian approximation of the transition density, and we propose the corresponding contrast-based information criterion.

16.
Estimating equation approaches have been widely used in statistical inference. Important examples of estimating equations are the likelihood equations. Since its introduction by Sir R. A. Fisher almost a century ago, maximum likelihood estimation (MLE) is still the most popular estimation method for fitting probability distributions to data, including fitting lifetime distributions with censored data. However, MLE may produce substantial bias and even fail to yield valid confidence intervals when the data size is not large enough or the data are censored. In this paper, based on nonlinear combinations of order statistics, we propose new estimating equation approaches for a class of probability distributions, which are particularly effective for skewed distributions with small sample sizes and censored data. The proposed approaches may possess a number of attractive properties such as consistency, sufficiency, and uniqueness. Asymptotic normality of these new estimators is derived. The construction of the new estimating equations and their numerical performance under different censoring schemes are detailed via the Weibull distribution and the generalized exponential distribution.

17.
In this paper, we establish several recurrence relations satisfied by the single and product moments of progressive Type-II right censored order statistics from an exponential distribution. These relations may then be used, for example, to compute all the means, variances, and covariances of exponential progressive Type-II right censored order statistics for all sample sizes n and all censoring schemes (R_1, R_2, ..., R_m), m ≤ n. The results presented in the paper generalize the results given by Joshi (1978, Sankhyā Ser. B, 39, 362–371; 1982, J. Statist. Plann. Inference, 6, 13–16) for the single and product moments of order statistics from the exponential distribution. To further generalize these results, we also consider the right truncated exponential distribution, and establish recurrence relations for the single and product moments of progressive Type-II right censored order statistics from that distribution.
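For the standard exponential, the means that such relations produce can also be computed directly from the classical normalized-spacings representation, E[X_{i:m:n}] = Σ_{j≤i} 1/γ_j with γ_j = m - j + 1 + Σ_{k=j}^{m} R_k; a short sketch (the censoring scheme below is an arbitrary example, not one from the paper):

```python
def prog_censored_exp_means(n, R):
    """Means of progressive Type-II right censored order statistics
    from a standard exponential, via normalized spacings:
    E[X_{i:m:n}] = sum_{j<=i} 1/gamma_j,
    gamma_j = m - j + 1 + sum_{k=j}^{m} R_k."""
    m = len(R)
    gammas = [m - j + 1 + sum(R[j - 1:]) for j in range(1, m + 1)]
    assert gammas[0] == n  # sanity check: gamma_1 equals the initial sample size
    means, s = [], 0.0
    for g in gammas:
        s += 1.0 / g
        means.append(s)
    return means

# scheme with n = 10, m = 5, one unit withdrawn at each failure
means = prog_censored_exp_means(10, [1, 1, 1, 1, 1])
# ordinary order statistics are the special case R = (0, ..., 0), m = n
plain = prog_censored_exp_means(3, [0, 0, 0])
```

The R = 0 special case recovers the familiar E[X_{i:n}] = Σ_{j≤i} 1/(n - j + 1) for exponential order statistics, which is the setting of the Joshi results the paper generalizes.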

18.
Distributions with unimodal densities are among the most commonly used in practice. However, for many unimodal distribution families the likelihood functions may be unbounded, thereby leading to inconsistent estimates. The maximum product of spacings (MPS) method, introduced by Cheng and Amin and independently by Ranneby, has been known to give consistent and asymptotically normal estimators in many parametric situations where the maximum likelihood method fails. In this paper, strong consistency theorems for the MPS method are obtained under general conditions which are comparable to the conditions of Bahadur and Wang for the maximum likelihood method. The consistency theorems obtained here apply to both parametric models and some nonparametric models. In particular, in any unimodal distribution family the asymptotic MPS estimator of the underlying unimodal density is shown to be universally L1 consistent without any further conditions (in parametric or nonparametric settings).
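A minimal MPS sketch for an exponential rate, maximizing the sum of log spacings over a crude grid (the grid search stands in for a proper optimizer, and all numerical settings are illustrative assumptions):

```python
import math
import random

random.seed(6)

def mps_exponential_rate(sample, grid=None):
    """Maximum product of spacings (MPS) estimate of an exponential rate:
    maximize sum_i log(F(x_(i)) - F(x_(i-1))) with the conventions
    F(x_(0)) = 0 and F(x_(n+1)) = 1, so the spacings partition [0, 1]."""
    x = sorted(sample)

    def log_spacings(lam):
        cdf = [0.0] + [1 - math.exp(-lam * t) for t in x] + [1.0]
        # floor guards against a zero spacing (e.g. tied observations)
        return sum(math.log(max(b - a, 1e-300)) for a, b in zip(cdf, cdf[1:]))

    if grid is None:
        grid = [0.01 * k for k in range(1, 1001)]   # candidate rates 0.01 .. 10.0
    return max(grid, key=log_spacings)

data = [random.expovariate(2.0) for _ in range(400)]
lam_hat = mps_exponential_rate(data)
```

For the exponential the MLE is already well behaved, so MPS mostly agrees with it here; the method's advantage shows in the unbounded-likelihood families the abstract discusses, where the same spacings objective remains bounded.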

19.

Akaike's information criterion (AIC), derived from asymptotics of the maximum likelihood estimator, is widely used in model selection. However, it has a finite-sample bias that produces overfitting in linear regression. To deal with this problem, Ishiguro, Sakamoto, and Kitagawa proposed a bootstrap-based extension to AIC which they called EIC. This article compares model-selection performance of AIC, EIC, a bootstrap-smoothed likelihood cross-validation (BCV) and its modification (632CV) in small-sample linear regression, logistic regression, and Cox regression. Simulation results show that EIC largely overcomes AIC's overfitting problem and that BCV may be better than EIC. Hence, the three methods based on bootstrapping the likelihood establish themselves as important alternatives to AIC in model selection with small samples.

20.
Empirical-likelihood-based inference for the parameters in a partially linear single-index model with randomly censored data is investigated. We introduce an estimated empirical likelihood for the parameters using a synthetic data approach and show that its limiting distribution is a mixture of central chi-squared distributions. To overcome this difficulty we propose an adjusted empirical likelihood that achieves the standard χ2 limit. Furthermore, since the index vector has norm 1, we use this constraint to reduce the dimension of the parameters, which increases the accuracy of the confidence regions. A simulation study is carried out to compare the finite-sample properties with those of the existing method. An application to a real data set is also illustrated.
