Similar Articles
20 similar articles found.
1.
Geyer (J. Roy. Statist. Soc. 56 (1994) 291) proposed a Monte Carlo method to approximate the whole likelihood function. His method is limited by the need to choose a proper reference point. We attempt to improve the method by assigning some prior information to the parameters and using the Gibbs output to evaluate the marginal likelihood and its derivatives through a Monte Carlo approximation. Vague priors are assigned to the parameters as well as the random effects within the Bayesian framework to represent a non-informative setting. The maximum likelihood estimates are then obtained through the Newton–Raphson method. Thus, our method serves as a bridge between the Bayesian and classical approaches. The method is illustrated by analyzing the famous salamander mating data with generalized linear mixed models.

2.
This paper is concerned with using the E-Bayesian method to compute estimates of the unknown parameter and of some survival-time quantities, e.g. the reliability and hazard functions, of the Lomax distribution based on Type-II censored data. These estimates are derived under a conjugate prior for the parameter and the balanced squared error loss function. A comparison between the new method and the corresponding Bayes and maximum likelihood techniques is conducted using Monte Carlo simulation.

3.
The skew-t-normal distribution is an important statistical tool for analyzing data with sharp peaks and heavy tails. This study proposes a mixture of linear joint location and scale models for skew-t-normal data and derives maximum likelihood estimates of the model parameters via the EM algorithm and the Newton-Raphson method. Random simulation experiments verify the effectiveness of the proposed approach, and an analysis of real data shows that the model and method are practical and feasible.

4.
An accelerated Monte Carlo EM algorithm
罗季 《应用概率统计》2008,24(3):312-318
The EM algorithm is a widely used data-augmentation algorithm for estimating the posterior mode, but a closed-form expression for the integral in its E-step is sometimes difficult or even impossible to obtain, which limits its applicability. The Monte Carlo EM algorithm solves this problem by evaluating the E-step integral with Monte Carlo simulation, greatly broadening the algorithm's reach. However, both the EM and Monte Carlo EM algorithms converge only linearly, at a rate governed by the fraction of missing information; when the proportion of missing data is high, convergence becomes very slow. The Newton-Raphson algorithm, in contrast, converges quadratically near the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines Monte Carlo EM with Newton-Raphson: the E-step is still carried out by Monte Carlo simulation, and the algorithm is shown to converge quadratically near the posterior mode. It therefore retains the advantages of Monte Carlo EM while improving its convergence rate. Numerical examples comparing the accelerated algorithm with the EM and Monte Carlo EM algorithms further illustrate its good performance.
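The accelerated algorithm above pairs a Monte Carlo E-step with a Newton-Raphson update. As a minimal sketch of that combination, the snippet below works on a toy right-censored exponential model rather than the paper's examples: the Monte Carlo draws supply estimates of the observed-data score (Fisher's identity) and of the observed information (Louis' identity), and a Newton step updates the rate. The model, sample sizes, and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: exponential lifetimes with rate lam_true, right-censored at c.
lam_true, c, n = 0.5, 2.0, 200
lifetimes = rng.exponential(1.0 / lam_true, n)
obs = np.minimum(lifetimes, c)
censored = lifetimes > c

def accelerated_mcem(obs, censored, lam0=1.0, m=2000, iters=20):
    """Monte Carlo E-step plus a Newton-Raphson step for the exponential rate."""
    lam, n = lam0, len(obs)
    for _ in range(iters):
        # E-step (Monte Carlo): impute each censored lifetime as its censoring
        # time plus an Exp(lam) draw (the memoryless conditional distribution).
        extra = rng.exponential(1.0 / lam, (m, censored.sum()))
        totals = obs[~censored].sum() + (obs[censored] + extra).sum(axis=1)
        # Complete-data log-likelihood: l_c(lam) = n*log(lam) - lam*sum(X).
        score = n / lam - totals.mean()   # Fisher identity: observed-data score
        info = n / lam**2 - totals.var()  # Louis identity: observed information
        lam += score / info               # Newton-Raphson step
    return lam

print(accelerated_mcem(obs, censored))
```

In this toy model the M-step actually has a closed form, so the Newton step is there only to show where the quadratically convergent update slots into the Monte Carlo EM iteration.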

5.
This article considers the estimation of the parameters of the Weibull distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood method under a step-stress partially accelerated test model, with the maximum likelihood estimates (MLEs) obtained by the Newton–Raphson algorithm. The approximate Fisher information matrix is also obtained for constructing asymptotic confidence bounds for the model parameters. The biases and mean square errors of the maximum likelihood estimators are computed to assess their performance through a Monte Carlo simulation study.
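Since the abstract reports Newton–Raphson MLEs under censoring, here is a minimal companion sketch: a right-censored Weibull log-likelihood maximized by Newton iterations with finite-difference derivatives. The step-stress acceleration factor and the exact hybrid-censoring bookkeeping of the paper are omitted; the random censoring scheme, the log-parameterization, and all names are assumptions made for illustration.

```python
import numpy as np

def weibull_cens_loglik(theta, t, delta):
    """Right-censored Weibull log-likelihood; theta = (log shape, log scale),
    delta = 1 for observed failures and 0 for censored times."""
    k, lam = np.exp(theta)
    z = t / lam
    # Failures contribute log f(t); censored times contribute log S(t) = -(t/lam)^k.
    return np.sum(delta * (np.log(k / lam) + (k - 1) * np.log(z)) - z**k)

def newton_mle(loglik, theta0, args=(), h=1e-4, tol=1e-6, max_iter=50):
    """Newton-Raphson with central finite-difference gradient and Hessian."""
    theta = np.asarray(theta0, dtype=float)
    p = theta.size
    eye = np.eye(p)
    for _ in range(max_iter):
        g = np.array([(loglik(theta + h * eye[i], *args)
                       - loglik(theta - h * eye[i], *args)) / (2 * h) for i in range(p)])
        H = np.empty((p, p))
        for i in range(p):
            for j in range(p):
                H[i, j] = (loglik(theta + h * (eye[i] + eye[j]), *args)
                           - loglik(theta + h * (eye[i] - eye[j]), *args)
                           - loglik(theta - h * (eye[i] - eye[j]), *args)
                           + loglik(theta - h * (eye[i] + eye[j]), *args)) / (4 * h**2)
        step = np.linalg.solve(H, g)
        theta = theta - step              # Newton step toward the maximum
        if np.max(np.abs(step)) < tol:
            break
    return np.exp(theta)                  # back to (shape, scale)

rng = np.random.default_rng(1)
t_full = rng.weibull(1.5, 300) * 2.0      # shape 1.5, scale 2.0
cens = rng.uniform(1.0, 4.0, 300)         # illustrative random censoring times
t, delta = np.minimum(t_full, cens), (t_full <= cens).astype(float)
print(newton_mle(weibull_cens_loglik, np.log([1.0, t.mean()]), args=(t, delta)))
```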

6.
This paper develops approximate Bayes estimators of the parameter of the Bilal failure model by using the method of Tierney and Kadane [Accurate approximations for posterior moments and marginal densities, J. Amer. Statist. Assoc. 81 (1986) 82–86] based on a Type-II censored sample and four different loss functions. An existence and uniqueness theorem for the maximum likelihood estimate is established. Based on a Monte Carlo simulation study, comparisons are made between these estimators and the corresponding Bayes estimators obtained by Gibbs sampling.

7.
Point maximum likelihood estimators for the parameters, the mean number of failures, and the failure rate in a non-homogeneous Poisson process are derived when only count data from k identical processes are available. Approximate confidence intervals based on the parametric bootstrap technique are considered. The performance of both the point and interval estimation procedures is assessed via Monte Carlo simulation.

8.
In this paper, we deal with parameter estimation of the log-logistic distribution. It is widely known that the maximum likelihood estimators (MLEs) are usually biased in the case of a finite sample size. This motivates a study of obtaining unbiased or nearly unbiased estimators for this distribution. Specifically, we consider a certain 'corrective' approach and Efron's bootstrap resampling method, both of which can reduce the bias of the MLEs to the second order of magnitude. As a comparison, the commonly used generalized method of moments is also considered for estimating the parameters. Monte Carlo simulation studies are conducted to compare the performance of the various estimators under consideration. Finally, two real-data examples are analyzed to illustrate the potential usefulness of the proposed estimators, especially when the sample size is small or moderate.
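The 'corrective' approach in the abstract is analytic, but the bootstrap route it is compared against is easy to sketch. Below is a minimal parametric-bootstrap bias correction for the log-logistic MLE using scipy's fisk distribution (scipy's name for the log-logistic); the sample size, the number of bootstrap replicates, and fixing the location at zero are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Small log-logistic (Fisk) sample: shape c and scale parameters.
c_true, scale_true, n = 2.5, 1.0, 30
x = stats.fisk.rvs(c_true, scale=scale_true, size=n, random_state=rng)

# MLE with the location fixed at zero (two-parameter log-logistic model).
c_hat, _, scale_hat = stats.fisk.fit(x, floc=0)

# Parametric bootstrap bias correction: theta_bc = 2*theta_hat - mean(theta_boot).
B = 500
boot = np.array([
    stats.fisk.fit(stats.fisk.rvs(c_hat, scale=scale_hat, size=n, random_state=rng),
                   floc=0)
    for _ in range(B)
])                                        # columns: (c_boot, loc_boot=0, scale_boot)
c_bc = 2 * c_hat - boot[:, 0].mean()
scale_bc = 2 * scale_hat - boot[:, 2].mean()
print(f"MLE:            c={c_hat:.3f}, scale={scale_hat:.3f}")
print(f"Bias-corrected: c={c_bc:.3f}, scale={scale_bc:.3f}")
```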

9.
In this paper, we investigate a competing risks model based on the exponentiated Weibull distribution under a Type-I progressively hybrid censoring scheme. To estimate the unknown parameters and the reliability function, the maximum likelihood estimators and asymptotic confidence intervals are derived. Since the Bayesian posterior density functions cannot be given in closed form, we adopt a Markov chain Monte Carlo method to calculate approximate Bayes estimators and highest posterior density credible intervals. To illustrate the estimation methods, a simulation study is carried out with numerical results. It is concluded that both maximum likelihood and Bayesian estimation can be used for statistical inference in competing risks models under a Type-I progressively hybrid censoring scheme.

10.
We propose a model for multinomial probit factor analysis by assuming t-distributed errors in probit factor analysis. To obtain maximum likelihood estimates, we use the Monte Carlo expectation maximization algorithm, with its M-step greatly simplified under conditional maximization and its E-step made feasible by Monte Carlo simulation. Standard errors are calculated using Louis's method. The methodology is illustrated with numerical simulations.

11.
We consider the geometric process (GP) when the distribution of the first occurrence time of an event is assumed to be Weibull. Explicit estimators of the parameters in the GP are derived using the method of modified maximum likelihood (MML) proposed by Tiku [24]. Asymptotic distributions and consistency properties of these estimators are obtained. A Monte Carlo simulation study shows that our estimators are more efficient than the widely used modified moment (MM) estimators. Further, two real-life examples are given at the end of the paper.

12.
The calibration of stochastic differential equations used to model spot prices in electricity markets is investigated. As an alternative to standard likelihood maximization, the adoption of a fully Bayesian paradigm is explored, which relies on Markov chain Monte Carlo (MCMC) stochastic simulation and provides the posterior distributions of the model parameters. The proposed method is applied to one- and two-factor stochastic models, using both simulated and real data. The results demonstrate good agreement between the maximum likelihood and MCMC point estimates. The latter approach, however, provides a more complete characterization of the model uncertainty, information that can be exploited to obtain a more realistic assessment of the forecasting error. To further validate the MCMC approach, the posterior distribution of the Italian electricity price volatility is explored for different maturities and compared with the corresponding maximum likelihood estimates.

13.
This paper studies maximum likelihood estimation of the parameters of log-linear models with missing data. The proposed model is fitted by a Monte Carlo EM algorithm: in the expectation step a Metropolis-Hastings algorithm generates a sample of the missing data, and in the maximization step Newton-Raphson iterations maximize the likelihood function. Finally, the observed-data Fisher information yields asymptotic variances and standard errors for the maximum likelihood estimates.
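The expectation step described above relies on Metropolis-Hastings sampling of the missing data. The sketch below shows only that generic ingredient: a random-walk Metropolis-Hastings sampler for a target density known up to a normalizing constant, with a stand-in Gaussian log-target rather than the log-linear model's actual conditional; the step size and names are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_draws, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings for a density known up to a constant.
    Inside an MCEM E-step, log_target would be the conditional log-density of
    the missing data given the observed data and the current parameter value."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    log_p, draws = log_target(x), []
    for _ in range(n_draws):
        prop = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        log_p_prop = log_target(prop)
        if np.log(rng.uniform()) < log_p_prop - log_p:   # accept with MH probability
            x, log_p = prop, log_p_prop
        draws.append(x.copy())
    return np.array(draws)

# Stand-in target: a unit-variance Gaussian centered at 1.0.
draws = metropolis_hastings(lambda z: -0.5 * np.sum((z - 1.0) ** 2),
                            x0=np.zeros(1), n_draws=5000)
print(draws[1000:].mean(), draws[1000:].std())           # roughly 1.0 and 1.0
```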

14.
The traditional PAR (Poisson autoregressive) process assumes that the arrival process is an equi-dispersed Poisson process, with mean equal to variance, whereas the arrival process in the real DGP (data-generating process) can be over-dispersed, with variance greater than the mean, or under-dispersed, with variance less than the mean. This paper proposes using the Katz family of distributions to model the arrival process in an INAR process (integer-valued autoregressive process with Katz arrivals) and deploys Monte Carlo simulations to examine the performance of the maximum likelihood (ML) and method of moments (MM) estimators of the INAR-Katz model. Finally, we use the INAR-Katz process to model count data on hospital emergency room visits for respiratory disease. The results show that the INAR-Katz model outperforms the Poisson and PAR(1) models and has great potential in empirical applications.
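To make the INAR setup concrete, the sketch below simulates an INAR(1) process with binomial thinning and over-dispersed negative binomial arrivals (one member of the Katz family) and computes simple method-of-moments estimates; the full ML and Katz-parameter estimation studied in the paper is not reproduced, and the parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_inar1(alpha, arrivals, T, rng):
    """INAR(1): X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial thinning
    and the arrivals eps_t are drawn by the supplied sampler."""
    x = np.empty(T, dtype=int)
    x[0] = arrivals()
    for t in range(1, T):
        x[t] = rng.binomial(x[t - 1], alpha) + arrivals()
    return x

# Over-dispersed arrivals: negative binomial (variance > mean).
x = simulate_inar1(0.4, lambda: rng.negative_binomial(2, 0.4), T=2000, rng=rng)

# Method-of-moments: the lag-1 autocorrelation of an INAR(1) equals alpha,
# and the arrival mean equals (1 - alpha) * E[X_t].
xc = x - x.mean()
alpha_mm = (xc[1:] * xc[:-1]).sum() / (xc**2).sum()
arrival_mean_mm = (1 - alpha_mm) * x.mean()
print(alpha_mm, arrival_mean_mm)      # roughly 0.4 and 3.0
```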

15.
王佳  丁洁丽 《数学杂志》2015,35(6):1521-1532
This paper looks for more stable numerical methods for situations in which algorithms such as Newton-Raphson fail. Using the quadratic lower-bound (QLB) algorithm proposed by Böhning and Lindsay (1988), a surrogate for the maximum likelihood objective of the logistic regression model is constructed and studied through numerical simulation. The results show that the quadratic lower-bound algorithm is a reasonable substitute for the Newton-Raphson algorithm, extending the application of numerical methods to logistic regression models.
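The quadratic lower-bound idea is concrete enough to sketch: for logistic regression the Hessian of the log-likelihood, -X'WX, is bounded below by the fixed matrix -X'X/4, so that fixed matrix can replace the Newton-Raphson Hessian at every iteration. The snippet below follows this standard Böhning-Lindsay construction; the simulated data and convergence settings are assumptions for illustration.

```python
import numpy as np

def logistic_qlb(X, y, iters=500, tol=1e-10):
    """Quadratic lower-bound (Bohning-Lindsay) iterations for logistic regression.
    The fixed matrix X'X/4 stands in for the Newton-Raphson Hessian -X'WX."""
    beta = np.zeros(X.shape[1])
    B_inv = 4.0 * np.linalg.inv(X.T @ X)        # inverse of the fixed curvature bound
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))  # current fitted probabilities
        step = B_inv @ (X.T @ (y - prob))       # score premultiplied by the bound
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Simulated check against the data-generating coefficients.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
beta_true = np.array([-0.5, 1.0, -2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
print(logistic_qlb(X, y))
```

The trade-off relative to Newton-Raphson is the one the abstract points to: each step is cheaper and the log-likelihood increases monotonically, at the cost of more (linearly convergent) iterations.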

16.
This article compares several estimation methods for nonlinear stochastic differential equations with discrete-time measurements. The likelihood function is computed by Monte Carlo simulation of the transition probability (simulated maximum likelihood, SML) using kernel density estimators and functional integrals, and by using the extended Kalman filter (EKF) and the second-order nonlinear filter (SNF). The relation to a local linearization method is discussed. A simulation study for a diffusion process in a double-well potential (Ginzburg–Landau equation) shows that, for large sampling intervals, the SML methods lead to better estimation results than the likelihood approach via the EKF and SNF. A second study using a nonlinear diffusion coefficient (generalized Cox–Ingersoll–Ross model) demonstrates that the EKF-type estimators may serve as efficient alternatives to simple maximum quasi-likelihood approaches and Monte Carlo methods.
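A bare-bones version of the SML ingredient, Euler-simulated transition draws smoothed by a Gaussian kernel density estimate, is sketched below for an Ornstein-Uhlenbeck-type drift rather than the paper's Ginzburg-Landau or Cox-Ingersoll-Ross models; the bandwidth rule, simulation sizes, and model are assumptions for illustration.

```python
import numpy as np

def sml_loglik(theta, obs, dt, n_sim=2000, n_sub=20, rng=None):
    """Simulated maximum likelihood for a discretely observed SDE
    dX = a*(b - X) dt + sigma dW (illustrative Ornstein-Uhlenbeck model).
    Each transition density is approximated by a Gaussian kernel density
    estimate over Euler-simulated end points started at the previous value."""
    a, b, sigma = theta
    rng = rng or np.random.default_rng()
    h = dt / n_sub
    ll = 0.0
    for x0, x1 in zip(obs[:-1], obs[1:]):
        x = np.full(n_sim, float(x0))
        for _ in range(n_sub):                    # Euler-Maruyama sub-steps
            x = x + a * (b - x) * h + sigma * np.sqrt(h) * rng.standard_normal(n_sim)
        bw = 1.06 * x.std() * n_sim ** (-0.2)     # Silverman's rule of thumb
        dens = np.exp(-0.5 * ((x1 - x) / bw) ** 2).mean() / (bw * np.sqrt(2 * np.pi))
        ll += np.log(dens + 1e-300)
    return ll

# Toy data from the exact OU transition; the true parameters should score higher.
rng = np.random.default_rng(6)
a, b, sigma, dt = 1.0, 0.5, 0.3, 0.1
path = [0.0]
for _ in range(300):
    m = b + (path[-1] - b) * np.exp(-a * dt)
    s = sigma * np.sqrt((1 - np.exp(-2 * a * dt)) / (2 * a))
    path.append(m + s * rng.standard_normal())
obs = np.array(path)
print(sml_loglik((1.0, 0.5, 0.3), obs, dt, rng=rng),
      sml_loglik((2.0, 0.0, 0.6), obs, dt, rng=rng))
```

The log-likelihood evaluated this way can be handed to any numerical optimizer; the paper's comparison with the EKF/SNF filtering approaches is not reproduced here.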

17.
A method for constructing priors is proposed that allows the off-diagonal elements of the concentration matrix of Gaussian data to be zero. The priors have the property that the marginal prior distribution of the number of nonzero off-diagonal elements of the concentration matrix (referred to below as model size) can be specified flexibly. The priors have normalizing constants for each model size, rather than for each model, giving a tractable number of normalizing constants that need to be estimated. The article shows how to estimate the normalizing constants using Markov chain Monte Carlo simulation and supersedes the method of Wong et al. (2003) [24] because it is more accurate and more general. The method is applied to two examples. The first is a mixture of constrained Wisharts. The second is from Wong et al. (2003) [24] and decomposes the concentration matrix into a function of partial correlations and conditional variances using a mixture distribution on the matrix of partial correlations. The approach detects structural zeros in the concentration matrix and estimates the covariance matrix parsimoniously if the concentration matrix is sparse.

18.
We propose a multinomial probit (MNP) model that is defined by a factor analysis model with covariates for analyzing unordered categorical data, and discuss its identification. Some useful MNP models are special cases of the proposed model. To obtain maximum likelihood estimates, we use the EM algorithm, with its M-step greatly simplified under conditional maximization and its E-step made feasible by Monte Carlo simulation. Standard errors are calculated by inverting a Monte Carlo approximation of the information matrix using Louis's method. The methodology is illustrated with simulated data.

19.
This paper describes a method for objectively selecting the optimal prior distribution, or for adjusting its hyper-parameter, among competing priors for a variety of Bayesian models. Implementing this method requires the integration of very high-dimensional functions to obtain the normalizing constants of the posterior and even of the prior distribution. The logarithm of the high-dimensional integral is reduced to a one-dimensional integration of a certain function with respect to a scalar parameter over the unit interval. Once the prior has been decided, the Bayes estimate (the posterior mean) is mainly used here, in addition to the posterior mode. All of these computations are based on the simulation of Gibbs distributions, such as Metropolis' Monte Carlo algorithm. The improvement in integration accuracy is substantial in comparison with conventional crude Monte Carlo integration. In the present method, there are essentially no practical restrictions in modeling the prior and the likelihood. Illustrative artificial data from a lattice system are given to show the practicability of the present procedure.

20.
The Cox proportional hazards model is the most widely used statistical model in the analysis of survival time data. Recently, a random weighting method was proposed to approximate the distribution of the maximum partial likelihood estimate of the regression coefficient in the Cox model. In simulation studies this method was shown to be less sensitive to heavy censoring than the bootstrap method, but it may not be second-order accurate, as has been shown for the bootstrap approximation. In this paper, we propose an alternative random weighting method based on one-step linear jackknife pseudo-values and prove the second-order accuracy of the proposed method. Monte Carlo simulations are also performed to evaluate the proposed method for fixed sample sizes.
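One plausible reading of the jackknife-pseudo-value construction underlying the proposed method is sketched below for a toy estimator (the sample mean) rather than the Cox partial-likelihood estimate: pseudo-values p_i = n*theta_hat - (n-1)*theta_hat_(-i) are formed and then randomly reweighted to mimic the estimator's sampling variability. The paper's one-step linear jackknife, its choice of random weights, and the second-order accuracy argument are not reproduced; every name and setting here is an illustrative assumption.

```python
import numpy as np

def jackknife_pseudovalues(estimator, data):
    """Jackknife pseudo-values: p_i = n*theta_hat - (n-1)*theta_hat_(-i)."""
    n = len(data)
    theta_hat = estimator(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return n * theta_hat - (n - 1) * loo

def random_weighting_replicates(pseudo, B=1000, rng=None):
    """Randomly reweighted averages of the pseudo-values; their spread around
    the plain average approximates the estimator's sampling variability."""
    rng = rng or np.random.default_rng()
    w = rng.dirichlet(np.ones(len(pseudo)), size=B)   # random weights summing to one
    return w @ pseudo

rng = np.random.default_rng(5)
data = rng.exponential(2.0, 60)
pseudo = jackknife_pseudovalues(np.mean, data)        # toy estimator: the sample mean
reps = random_weighting_replicates(pseudo, rng=rng)
print(pseudo.mean(), reps.std())                      # point estimate and its spread
```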
