Similar Documents
20 similar documents found.
1.
Implementations of the Monte Carlo EM Algorithm
The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm in which the expectation in the E-step is computed numerically through Monte Carlo simulation. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost of obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling, whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation. We obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge the Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
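The basic MCEM loop, with a Monte Carlo E-step and a growing simulation size, can be sketched on a toy model (a minimal sketch, assuming right-censored N(mu, 1) data with known unit variance; the fixed 1.2 growth factor is an illustrative stand-in for the paper's automated, error-driven rule):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
# Toy data: N(mu_true, 1) observations right-censored at c
mu_true, c, n = 1.0, 1.5, 500
y = rng.normal(mu_true, 1.0, n)
cens = y > c
y_obs = np.where(cens, c, y)

mu, m = 0.0, 100  # starting value and initial Monte Carlo sample size
for _ in range(20):
    # MC E-step: draw latent values for censored cases from N(mu, 1)
    # truncated to (c, inf), then average over the Monte Carlo draws
    z = truncnorm.rvs(c - mu, np.inf, loc=mu, scale=1.0,
                      size=(m, int(cens.sum())), random_state=rng)
    e_lat = z.mean(axis=0)
    # M-step: closed-form update of mu given the imputed expectations
    mu = (y_obs[~cens].sum() + e_lat.sum()) / n
    m = int(m * 1.2)  # crude stand-in for the automated sample-size rule
print(round(mu, 2))
```

Growing `m` across iterations keeps the Monte Carlo error from dominating the EM increments as the estimate approaches the mode.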

2.
This article compares several estimation methods for nonlinear stochastic differential equations with discrete-time measurements. The likelihood function is computed by Monte Carlo simulation of the transition probability (simulated maximum likelihood, SML) using kernel density estimators and functional integrals, and by using the extended Kalman filter (EKF) and second-order nonlinear filter (SNF). The relation to a local linearization method is discussed. A simulation study for a diffusion process in a double-well potential (Ginzburg–Landau equation) shows that, for large sampling intervals, the SML methods lead to better estimation results than the likelihood approach via the EKF and SNF. A second study using a nonlinear diffusion coefficient (generalized Cox–Ingersoll–Ross model) demonstrates that the EKF-type estimators may serve as efficient alternatives to simple maximum quasilikelihood approaches and Monte Carlo methods.

3.
ABC (approximate Bayesian computation) is a general approach for dealing with models with an intractable likelihood. In this work, we derive ABC algorithms based on QMC (quasi-Monte Carlo) sequences. We show that the resulting ABC estimates have a lower variance than their Monte Carlo counterparts. We also develop QMC variants of sequential ABC algorithms, which progressively adapt the proposal distribution and the acceptance threshold. We illustrate our QMC approach through several examples taken from the ABC literature.
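To illustrate the QMC idea on the simplest case (plain ABC rejection, not the paper's sequential algorithms), one can drive the sampler with a scrambled Sobol sequence from `scipy.stats.qmc`, pushing the low-discrepancy points through the prior's inverse CDF; the normal model, N(0, 3²) prior, sample-mean summary, and tolerance below are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, 50)  # "observed" data, true theta = 2
s_obs = y_obs.mean()              # summary statistic

# QMC proposals: Sobol points mapped to N(0, 3^2) prior draws via inverse CDF
sobol = qmc.Sobol(d=1, scramble=True, seed=1)
theta = norm.ppf(sobol.random(2**10), loc=0.0, scale=3.0).ravel()

# Simulate a summary for each proposal and accept those within tolerance
s_sim = np.array([rng.normal(t, 1.0, 50).mean() for t in theta])
eps = 0.1
accepted = theta[np.abs(s_sim - s_obs) < eps]
print(len(accepted), round(accepted.mean(), 1))
```

Replacing `sobol.random(2**10)` with `rng.uniform(size=(2**10, 1))` recovers the plain Monte Carlo version, against which the variance reduction can be measured.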

4.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference for the joint latent class modeling approach has proved very challenging due to the computational complexity of seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation–maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared to the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with the quasi-Newton method not only substantially reduces the computational complexity and run time but also retains comparable estimation efficiency.

5.
6.
This paper considers large sample inference for the regression parameter in a partly linear model for right censored data. We introduce an estimated empirical likelihood for the regression parameter and show that its limiting distribution is a mixture of central chi-squared distributions. A Monte Carlo method is proposed to approximate the limiting distribution. This enables one to make empirical likelihood-based inference for the regression parameter. We also develop an adjusted empirical likelihood method which only appeals to standard chi-square tables. Finite sample performance of the proposed methods is illustrated in a simulation study.

7.
In model-based approaches to geostatistical problems, inference often relies on Markov chain Monte Carlo methods because of the complexity of the models. This article focuses on generalized linear spatial models and demonstrates that parameter estimation and model selection using Markov chain Monte Carlo maximum likelihood is a feasible and very useful technique. A dataset of radionuclide concentrations on Rongelap Island is used to illustrate the techniques. For this dataset we demonstrate that the log-link function is not a good choice, and that there exists additional nonspatial variation which cannot be attributed to the Poisson error distribution. We also show that the interpretation of this additional variation as either micro-scale variation or measurement error has a significant impact on predictions. The techniques presented in this article would also be useful for other types of geostatistical models.

8.
Geyer (J. Roy. Statist. Soc. 56 (1994) 291) proposed a Monte Carlo method to approximate the whole likelihood function. His method is limited by the need to choose a proper reference point. We attempt to improve the method by assigning some prior information to the parameters and using the Gibbs output to evaluate the marginal likelihood and its derivatives through a Monte Carlo approximation. Vague priors are assigned to the parameters as well as the random effects within the Bayesian framework to represent a non-informative setting. The maximum likelihood estimates are then obtained through the Newton–Raphson method. Thus, our method serves as a bridge between the Bayesian and classical approaches. The method is illustrated by analyzing the well-known salamander mating data with generalized linear mixed models.

9.
Highly structured generalised response models, such as generalised linear mixed models and generalised linear models for time series regression, have become an indispensable vehicle for data analysis and inference in many areas of application. However, their use in practice is hindered by high-dimensional intractable integrals. Quasi-Monte Carlo (QMC) is a dynamic research area in the general problem of high-dimensional numerical integration, although its potential for statistical applications is yet to be fully explored. We survey recent research in QMC, particularly lattice rules, and report on its application to highly structured generalised response models. New challenges for QMC are identified and new methodologies are developed. QMC methods are seen to provide significant improvements compared with ordinary Monte Carlo methods.

10.
Although generalized linear mixed effects models have received much attention in the statistical literature, there is still no computationally efficient algorithm for computing maximum likelihood estimates for such models when there are a moderate number of random effects. Existing algorithms are either computationally intensive or they compute estimates from an approximate likelihood. Here we propose an algorithm—the spherical–radial algorithm—that is computationally efficient and computes maximum likelihood estimates. Although we concentrate on two-level, generalized linear mixed effects models, the same algorithm can be applied to many other models as well, including nonlinear mixed effects models and frailty models. The computational difficulty for estimation in these models is in integrating the joint distribution of the data and the random effects to obtain the marginal distribution of the data. Our algorithm uses a multidimensional quadrature rule developed in earlier literature to integrate the joint density. This article discusses how this rule may be combined with an optimization algorithm to efficiently compute maximum likelihood estimates. Because of stratification and other aspects of the quadrature rule, the resulting integral estimator has significantly less variance than can be obtained through simple Monte Carlo integration. Computational efficiency is achieved, in part, because relatively few evaluations of the joint density may be required in the numerical integration.
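The integral that makes these likelihoods intractable is easy to see in one dimension, where ordinary Gauss–Hermite quadrature (a simpler stand-in for the spherical–radial rule discussed above) already works; `cluster_loglik` and all model values below are illustrative:

```python
import numpy as np

# Marginal log-likelihood of one cluster in a random-intercept logistic model:
#   L = ∫ Π_j p(y_j | b) φ(b; 0, σ²) db,
# approximated by Gauss–Hermite quadrature after the change of variables
# b = sqrt(2) σ x, which absorbs the e^{-x²} Hermite weight.
def cluster_loglik(y, x, beta, sigma, n_nodes=20):
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes             # quadrature nodes on b-scale
    eta = x[:, None] * beta + b[None, :]         # linear predictor per node
    p = 1.0 / (1.0 + np.exp(-eta))
    lik_b = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return np.log(lik_b @ weights / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 0.2, 1.3])
print(round(cluster_loglik(y, x, beta=0.8, sigma=1.0), 3))
```

With several random effects the node count of such a product rule grows exponentially, which is exactly the problem the spherical–radial transformation and stratification are designed to avoid.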

11.
We describe NIMBLE, a system for programming statistical algorithms for general model structures within R. NIMBLE is designed to meet three challenges: flexible model specification, a language for programming algorithms that can use different models, and a balance between high-level programmability and execution efficiency. For model specification, NIMBLE extends the BUGS language and creates model objects, which can manipulate variables, calculate log probability values, generate simulations, and query the relationships among variables. For algorithm programming, NIMBLE provides functions that operate with model objects using two stages of evaluation. The first stage allows specialization of a function to a particular model and/or nodes, such as creating a Metropolis-Hastings sampler for a particular block of nodes. The second stage allows repeated execution of computations using the results of the first stage. To achieve efficient second-stage computation, NIMBLE compiles models and functions via C++, using the Eigen library for linear algebra, and provides the user with an interface to compiled objects. The NIMBLE language represents a compilable domain-specific language (DSL) embedded within R. This article provides an overview of the design and rationale for NIMBLE along with illustrative examples including importance sampling, Markov chain Monte Carlo (MCMC) and Monte Carlo expectation maximization (MCEM). Supplementary materials for this article are available online.

12.
Over the past two decades, Lévy models and Monte Carlo simulation techniques have attracted increasing attention in finance. In continuous-time financial modeling, Lévy models with jumps capture market jumps better than continuous-path Brownian motion models, fit the statistical features of financial data more closely, and price derivatives more accurately. However, compared with the classical Black-Scholes model, the computational complexity of pricing derivatives and computing hedging strategies under Lévy models increases substantially, and Monte Carlo simulation has become one of the most important computational methods for Lévy models. This paper first reviews in detail the background of Lévy models and the important role of simulation methods in this setting, and then briefly presents basic methods for simulating Lévy processes and estimating their gradients.

13.
The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilizing a Markov chain Monte Carlo (MCMC) technique, ABC and a Bayesian bootstrap procedure was developed in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting in which we make no parametric assumptions, meaning we cannot evaluate the likelihood point-wise or in this case simulate directly from the likelihood model. The use of a bootstrap procedure allows us to generate samples from the intractable likelihood without the requirement of distributional assumptions; this is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then estimate predictive Bayesian capital estimates, the value at risk (VaR) and the mean square error of prediction (MSEP). The latter is compared with the classical bootstrap and credibility methods.

14.
Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalized posterior distribution over the model parameters which is rarely available in closed form. For doubly intractable models, estimating the evidence adds another layer of difficulty. Consequently, the selection of the model that best describes an observed network among a collection of exponential random graph models for network analysis is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to an unreasonable inference. This article specifies a method to adjust pseudolikelihoods to obtain a reasonable, yet tractable, approximation to the likelihood. This allows implementation of widely used computational methods for evidence estimation and pursuit of Bayesian model selection of exponential random graph models for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost. Supplementary material for this article is available online.

15.
This article proposes a new approach for Bayesian and maximum likelihood parameter estimation for stationary Gaussian processes observed on a large lattice with missing values. We propose a Markov chain Monte Carlo approach for Bayesian inference, and a Monte Carlo expectation-maximization algorithm for maximum likelihood inference. Our approach uses data augmentation and circulant embedding of the covariance matrix, and provides likelihood-based inference for the parameters and the missing data. Using simulated data and an application to satellite sea surface temperatures in the Pacific Ocean, we show that our method provides accurate inference on lattices of sizes up to 512 × 512, and is competitive with two popular methods: composite likelihood and spectral approximations.
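The circulant-embedding step can be sketched in one dimension (the article works on 2-D lattices; the exponential covariance and grid size here are illustrative choices): wrap the covariance onto a circle so the resulting matrix is diagonalized by the FFT, then color complex white noise with the square roots of the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
# Stationary exponential covariance on a regular 1-D grid (range = 20 cells)
cov = np.exp(-np.arange(n) / 20.0)

# Embed the covariance in a circulant matrix of size m = 2n - 2:
# first row [c_0, ..., c_{n-1}, c_{n-2}, ..., c_1]
c = np.concatenate([cov, cov[-2:0:-1]])
m = len(c)
lam = np.fft.fft(c).real      # eigenvalues of the circulant matrix
lam = np.maximum(lam, 0.0)    # clip tiny negative values from round-off

# One exact draw: real part of F(sqrt(lam) * xi) / sqrt(m), first n entries
xi = rng.normal(size=m) + 1j * rng.normal(size=m)
sample = (np.fft.fft(np.sqrt(lam) * xi) / np.sqrt(m))[:n].real
print(sample.shape)
```

Because the embedded matrix is circulant, each draw costs O(m log m) rather than the O(n³) of a Cholesky factorization, which is what makes 512 × 512 lattices feasible.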

16.
An Accelerated Monte Carlo EM Algorithm
Luo Ji. Chinese Journal of Applied Probability and Statistics, 2008, 24(3): 312-318.
The EM algorithm is a widely used data-augmentation algorithm for computing posterior mode estimates, but deriving a closed-form expression for the integral in its E-step is sometimes difficult or even impossible, which limits its applicability. The Monte Carlo EM algorithm solves this problem by evaluating the E-step integral with Monte Carlo simulation, greatly broadening its applicability. However, both the EM algorithm and the Monte Carlo EM algorithm converge only linearly, at a rate governed by the fraction of missing information, so convergence becomes very slow when the proportion of missing data is high. The Newton-Raphson algorithm, by contrast, converges quadratically near the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines the Monte Carlo EM algorithm with the Newton-Raphson algorithm: the E-step is still carried out by Monte Carlo simulation, and the algorithm is shown to converge quadratically near the posterior mode. It thus retains the advantages of the Monte Carlo EM algorithm while improving its convergence rate. Numerical examples compare the accelerated algorithm with the EM and Monte Carlo EM algorithms and further demonstrate its merits.
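The contrast between the EM algorithm's linear rate and Newton-Raphson's quadratic rate near the mode can be sketched on a toy likelihood (illustrative model: right-censored N(mu, 1) data; a finite-difference Hessian stands in for the paper's Monte Carlo derivative estimates):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu_true, c, n = 1.0, 1.5, 400
y = rng.normal(mu_true, 1.0, n)
obs = np.minimum(y, c)
cens = y > c

def score(mu):
    # Observed-data score for right-censored N(mu, 1) data:
    # uncensored residuals plus the hazard term for censored cases
    s = (obs[~cens] - mu).sum()
    z = c - mu
    s += cens.sum() * norm.pdf(z) / norm.sf(z)
    return s

# Crude starting value, then Newton-Raphson on the score; the derivative
# is approximated by a central difference
mu = obs.mean()
for _ in range(6):
    h = 1e-5
    mu = mu - score(mu) / ((score(mu + h) - score(mu - h)) / (2 * h))
print(round(mu, 2))
```

A handful of Newton steps drives the score essentially to zero, whereas an EM iteration from the same start contracts the error only by a constant factor per step.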

17.
Bayesian computation with empirical likelihood (BCel) is used to estimate the parameters of two stochastic volatility models, the SV-N and SV-T models. Numerical experiments verify the feasibility and effectiveness of the method, and it is compared with the traditional Markov chain Monte Carlo approach. Finally, the SV-T model is applied to the Shanghai Composite Index, and the model's parameter estimates are obtained with the BCel algorithm.

18.
In this paper, we develop a conditional likelihood based approach for estimating the equilibrium price and shares in markets with differentiated products and oligopoly supply. We model market demand using a discrete choice model with random coefficients and random utility. For most applications, the likelihood function of equilibrium prices and shares is intractable and cannot be directly analyzed. To overcome this, we develop a Markov Chain Monte Carlo simulation strategy to estimate parameters and distributions. To illustrate our methodology, we generate a dataset of prices and quantities simulated from a differentiated goods oligopoly across a number of markets. We apply our methodology to this dataset to demonstrate its attractive features as well as its accuracy and validity. Copyright © 2014 John Wiley & Sons, Ltd.

19.
We develop new methodology for the estimation of a general class of term-structure models based on a Monte Carlo filtering approach. We utilize the generalized state space model, which can be naturally applied to the estimation of term-structure models based on Markov state processes. It is also possible to introduce measurement errors in a general way without any bias. Moreover, the Monte Carlo filter can be applied even to models in which zero-coupon bond prices cannot be obtained analytically. As an example, we apply the method to LIBORs (London Interbank Offered Rates) and interest rate swaps in the Japanese market and show the usefulness of our approach.
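A bootstrap particle filter is the simplest instance of the Monte Carlo filtering idea; the sketch below (a minimal sketch on an illustrative linear-Gaussian state space, not the term-structure models of the paper) propagates, weights, and resamples particles:

```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 300, 1000
# Illustrative linear-Gaussian state space (a stand-in for a one-factor
# term-structure state): x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0, 0.5)
    y[t] = x[t] + rng.normal(0, 0.5)

# Bootstrap particle (Monte Carlo) filter
particles = rng.normal(0, 1, N)
means = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, 0.5, N)  # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)   # measurement weights
    w /= w.sum()
    means[t] = w @ particles                             # filtered mean
    particles = rng.choice(particles, N, p=w)            # multinomial resample
print(round(np.mean((means[1:] - x[1:]) ** 2), 3))
```

Nothing in the weighting step requires an analytic transition density of the observables, which is why the approach extends to models without closed-form bond prices.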

20.
Hamiltonian Monte Carlo (HMC) has been progressively incorporated within the statistician's toolbox as an alternative sampling method in settings where standard Metropolis–Hastings is inefficient. HMC generates a Markov chain on an augmented state space with transitions based on a deterministic differential flow derived from Hamiltonian mechanics. In practice, the evolution of Hamiltonian systems cannot be solved analytically, requiring numerical integration schemes. Under numerical integration, the resulting approximate solution no longer preserves the measure of the target distribution, so an accept–reject step is used to correct the bias. For doubly intractable distributions, such as posterior distributions based on Gibbs random fields, HMC suffers from computational difficulties: neither the gradients in the differential flow nor the accept–reject ratio can be computed exactly. In this article, we study the behavior of HMC when these quantities are replaced by Monte Carlo estimates. Supplemental codes for implementing methods used in the article are available online.
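A bare-bones HMC transition with leapfrog integration and the accept-reject correction looks as follows (a minimal sketch for a standard normal target with exact gradients; in the doubly intractable setting studied here, `grad_logp` and the acceptance ratio would be replaced by Monte Carlo estimates):

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_logp(q):
    return -q  # exact gradient of log density for a standard normal target

def hmc_step(q0, eps=0.2, n_leap=20):
    p0 = rng.normal()          # resample auxiliary momentum
    q, p = q0, p0
    # Leapfrog integration of the Hamiltonian flow
    p += 0.5 * eps * grad_logp(q)
    for _ in range(n_leap - 1):
        q += eps * p
        p += eps * grad_logp(q)
    q += eps * p
    p += 0.5 * eps * grad_logp(q)
    # Accept-reject step corrects the discretization bias
    h0 = 0.5 * p0 ** 2 + 0.5 * q0 ** 2   # initial Hamiltonian
    h1 = 0.5 * p ** 2 + 0.5 * q ** 2     # Hamiltonian after integration
    return q if np.log(rng.uniform()) < h0 - h1 else q0

q = 0.0
draws = np.empty(5000)
for i in range(5000):
    q = hmc_step(q)
    draws[i] = q
print(round(draws.std(), 2))
```

Because the leapfrog integrator is volume-preserving and time-reversible, the simple Hamiltonian difference `h0 - h1` is the correct log acceptance ratio; noisy estimates of either the gradient or this ratio are what break exactness in the doubly intractable case.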


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号