Similar Articles

20 similar articles found.
1.
Abstract

This article focuses on improving estimation for Markov chain Monte Carlo simulation. The proposed methodology is based upon the use of importance link functions. With the help of appropriate importance sampling weights, effective estimates of functionals are developed. The method is most easily applied to irreducible Markov chains, where application is typically immediate. An important conceptual point is the applicability of the method to reducible Markov chains through the use of many-to-many importance link functions. Applications discussed include estimation of marginal genotypic probabilities for pedigree data, estimation for models with and without influential observations, and importance sampling for a target distribution with thick tails.
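The importance-weighting idea underlying this abstract can be illustrated with a minimal self-normalized importance sampling sketch. Everything below is a hypothetical toy (a normal target, a wider normal proposal, the functional h(x) = x²), not the article's importance link functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):                  # unnormalized target density: N(0, 1)
    return -0.5 * x**2

def log_q(x):                  # proposal density (up to a constant): N(0, 2^2)
    return -0.5 * (x / 2.0) ** 2

x = rng.normal(0.0, 2.0, 100_000)     # draws from the proposal
log_w = log_p(x) - log_q(x)
w = np.exp(log_w - log_w.max())       # subtract the max for numerical stability
w /= w.sum()                          # self-normalized importance weights

estimate = np.sum(w * x**2)           # estimates E_p[X^2], which is 1 here
```

Self-normalization means only density ratios up to a constant are needed, which is why the normalizing constants of `log_p` and `log_q` can be dropped.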

2.
Markov chain Monte Carlo (MCMC) methods for Bayesian computation are mostly used when the dominating measure is the Lebesgue measure, the counting measure, or a product of these. Many Bayesian problems give rise to distributions that are not dominated by the Lebesgue measure or the counting measure alone. In this article we introduce a simple framework for using MCMC algorithms in Bayesian computation with mixtures of mutually singular distributions. The idea is to find a common dominating measure that allows the use of traditional Metropolis-Hastings algorithms. In particular, using our formulation, the Gibbs sampler can be used whenever the full conditionals are available. We compare our formulation with the reversible jump approach and show that the two are closely related. We give results for three examples, involving testing a normal mean, variable selection in regression, and hypothesis testing for differential gene expression under multiple conditions. This allows us to compare the three methods considered: Metropolis-Hastings with mutually singular distributions, Gibbs sampler with mutually singular distributions, and reversible jump. In our examples, we found the Gibbs sampler to be more precise and to need considerably less computer time than the other methods. In addition, the full conditionals used in the Gibbs sampler can be used to further improve the estimates of the model posterior probabilities via Rao-Blackwellization, at no extra cost.
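A minimal sketch of the "testing a normal mean" flavor of the problem: the posterior on θ mixes a point mass at 0 (singular with respect to Lebesgue measure) with a continuous component. The specific prior, data value, and collapsed sampler below are illustrative assumptions, not the article's construction; the sampled indicator frequency can be checked against the analytic posterior probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y ~ N(theta, 1); theta = 0 with prior prob 1/2 (a point mass),
# otherwise theta ~ N(0, tau^2). The two components are mutually singular.
y, tau2 = 1.5, 4.0

def norm_pdf(x, var):
    return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)

m0 = norm_pdf(y, 1.0)            # marginal likelihood with theta == 0
m1 = norm_pdf(y, 1.0 + tau2)     # marginal likelihood with theta integrated out
post1 = m1 / (m0 + m1)           # analytic P(theta != 0 | y)

# Collapsed sampler: draw the component indicator, then theta given it.
draws = np.zeros(20_000)
for i in range(draws.size):
    if rng.random() < post1:
        v = tau2 / (1.0 + tau2)                   # posterior variance of theta
        draws[i] = rng.normal(v * y, np.sqrt(v))  # continuous component
    # else theta stays exactly 0: the point-mass component

freq = np.mean(draws != 0.0)     # sampled P(theta != 0 | y)
```

Because the indicator's conditional is available in closed form here, the sampled frequency `freq` should agree with `post1` up to Monte Carlo error.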

3.
Abstract

This article proposes alternative methods for constructing estimators from accept-reject samples by incorporating the variables rejected by the algorithm. The resulting estimators are quick to compute, and turn out to be variations of importance sampling estimators, although their derivations are quite different. We show that these estimators are superior asymptotically to the classical accept-reject estimator, which ignores the rejected variables. In addition, we consider the issue of rescaling of estimators, a topic that has implications beyond accept-reject and importance sampling. We show how rescaling can improve an estimator and illustrate the domination of the standard importance sampling techniques in different setups.
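The contrast between the classical accept-reject estimator and one that recycles rejected candidates can be sketched as follows. The target, proposal, and envelope constant are illustrative choices, and the recycled estimator shown is plain self-normalized importance sampling over all candidates, not the article's specific construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: target Beta(2, 2) with density p(x) = 6x(1-x) on [0, 1],
# proposal Uniform(0, 1), envelope constant M = 1.5 (the density's maximum).
n, M = 50_000, 1.5
x = rng.random(n)                     # all candidates from the proposal
p = 6.0 * x * (1.0 - x)
accepted = rng.random(n) < p / M      # accept-reject step

# Classical estimator of E[X]: accepted draws only.
classic = x[accepted].mean()

# Recycled estimator: every candidate, importance-weighted by p(x)/q(x) = p(x).
w = p / p.sum()
recycled = np.sum(w * x)
```

Both estimators target E[X] = 1/2 for the Beta(2, 2) distribution; the recycled version extracts information from the roughly one-third of candidates the classical estimator throws away.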

4.
Abstract

This article introduces a general method for Bayesian computing in richly parameterized models, structured Markov chain Monte Carlo (SMCMC), that is based on a blocked hybrid of the Gibbs sampling and Metropolis-Hastings algorithms. SMCMC speeds algorithm convergence by using the structure that is present in the problem to suggest an appropriate Metropolis-Hastings candidate distribution. Although the approach is easiest to describe for hierarchical normal linear models, we show that its extension to both nonnormal and nonlinear cases is straightforward. After describing the method in detail we compare its performance (in terms of run time and autocorrelation in the samples) to other existing methods, including the single-site updating Gibbs sampler available in the popular BUGS software package. Our results suggest significant improvements in convergence for many problems using SMCMC, as well as broad applicability of the method, including previously intractable hierarchical nonlinear model settings.

5.
Importance sampling methods can be iterated like MCMC algorithms, while being more robust against dependence and starting values. The population Monte Carlo principle consists of iterated generations of importance samples, with importance functions depending on the previously generated importance samples. The advantage over MCMC algorithms is that the scheme is unbiased at any iteration and can thus be stopped at any time, while iterations improve the performances of the importance function, thus leading to an adaptive importance sampling. We illustrate this method on a mixture example with multiscale importance functions. A second example reanalyzes the ion channel model using an importance sampling scheme based on a hidden Markov representation, and compares population Monte Carlo with a corresponding MCMC algorithm.
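The population Monte Carlo principle — importance functions adapted from the previous generation's weighted sample — can be sketched with a deliberately simple moment-matching scheme. The normal target, normal proposal family, and moment-matching rule are illustrative assumptions, much simpler than the multiscale importance functions of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):             # unnormalized target: N(3, 1)
    return -0.5 * (x - 3.0) ** 2

mu, sigma = 0.0, 5.0           # deliberately poor starting proposal N(0, 5^2)
for _ in range(10):
    x = rng.normal(mu, sigma, 5_000)                   # current generation
    # log importance weight = log target - log proposal (constants cancel
    # after normalization)
    log_w = log_target(x) + 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    mu = np.sum(w * x)                                 # adapt the proposal by
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))         # weighted moment matching
```

Each generation is a valid importance sample, so the scheme could be stopped at any iteration; continuing simply moves the proposal toward the target, here N(3, 1).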

6.
Abstract

The so-called “Rao-Blackwellized” estimators proposed by Gelfand and Smith do not always reduce variance in Markov chain Monte Carlo when the dependence in the Markov chain is taken into account. An illustrative example is given, and a theorem characterizing the necessary and sufficient condition for such an estimator to always reduce variance is proved.
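The construction at issue can be sketched with a two-component Gibbs sampler: alongside each raw draw of X, record the conditional mean E[X | Y] at the current Y. The bivariate normal model below is a hypothetical illustration of the construction only; as the abstract notes, the chain's autocorrelation means the Rao-Blackwellized version is not guaranteed to have smaller variance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: bivariate normal, zero means, unit variances, correlation rho,
# so X | Y = y ~ N(rho * y, 1 - rho^2) and symmetrically for Y | X.
rho, n = 0.7, 50_000
s = np.sqrt(1.0 - rho**2)
x = y = 0.0
raw = np.empty(n)
rb = np.empty(n)
for i in range(n):
    x = rho * y + s * rng.standard_normal()   # draw X | Y
    y = rho * x + s * rng.standard_normal()   # draw Y | X
    raw[i] = x                                # ordinary draw of X
    rb[i] = rho * y                           # Rao-Blackwellized: E[X | Y]

raw_mean, rb_mean = raw.mean(), rb.mean()     # both estimate E[X] = 0
```

Both averages are consistent for E[X]; comparing their variabilities across independent runs is what reveals whether conditioning actually helps for a given chain.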

7.
Abstract

Markov chain Monte Carlo (MCMC) methods are currently enjoying a surge of interest within the statistical community. The goal of this work is to formalize and support two distinct adaptive strategies that typically accelerate the convergence of an MCMC algorithm. One approach is through resampling; the other incorporates adaptive switching of the transition kernel. Support is both by analytic arguments and simulation study. Application is envisioned in low-dimensional but nontrivial problems. Two pathological illustrations are presented. Connections with reparameterization are discussed as well as possible difficulties with infinitely often adaptation.

8.
While studying various features of the posterior distribution of a vector-valued parameter using an MCMC sample, a subsample is often all that is available for analysis. The goal of benchmark estimation is to use the best available information, that is, the full MCMC sample, to improve future estimates made on the basis of the subsample. We discuss a simple approach to do this and provide a theoretical basis for the method. The methodology and benefits of benchmark estimation are illustrated using a well-known example from the literature. We obtain nearly a 90% reduction in MSE with the technique based on a 1-in-10 subsample and show that greater benefits accrue with the thinner subsamples that are often used in practice.
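One way to sketch the benchmark idea is as a control-variate correction: a functional whose value is known exactly on the full sample anchors an estimate computed only on the 1-in-10 subsample. The iid stand-in sample, the choice of benchmark, and the regression coefficient below are illustrative assumptions, not the article's method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for a full MCMC sample (iid here for simplicity) and a
# 1-in-10 subsample of it.
full = rng.standard_normal(100_000)
sub = full[::10]

h = sub**2                       # functional of interest; E[X^2] = 1 here
b = sub                          # benchmark functional; its full-sample mean
                                 # is known exactly from the full run
beta = np.cov(h, b)[0, 1] / np.var(b, ddof=1)   # control-variate coefficient
adjusted = h.mean() + beta * (full.mean() - b.mean())
```

The correction term shifts the subsample estimate by how far the subsample's benchmark average strays from the full-sample benchmark, scaled by the estimated regression of h on b.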

9.
The problem of clustering a group of observations according to some objective function (e.g., K-means clustering, variable selection) or a density (e.g., posterior from a Dirichlet process mixture model prior) can be cast in the framework of Monte Carlo sampling for cluster indicators. We propose a new method called the evolutionary Monte Carlo clustering (EMCC) algorithm, in which three new “crossover moves,” based on swapping and reshuffling subcluster intersections, are proposed. We apply the EMCC algorithm to several clustering problems including Bernoulli clustering, biological sequence motif clustering, BIC-based variable selection, and mixture of normals clustering. We compare EMCC's performance both as a sampler and as a stochastic optimizer with Gibbs sampling, “split-merge” Metropolis–Hastings algorithms, K-means clustering, and the MCLUST algorithm.

10.
We consider a modified version of the de Finetti model in insurance risk theory in which, when the surplus becomes negative, the company can borrow and thus continue operating. For this model we examine the problem of estimating the time in the red over a finite horizon via simulation. We propose a smoothed estimator based on a conditioning argument which is very simple to implement as well as particularly efficient, especially when the claim distribution is heavy tailed. We establish unbiasedness for this estimator and show that its variance is lower than that of the naïve estimator based on counts. Finally, we present a number of simulation results showing that the smoothed estimator often has significantly lower variance than the naïve Monte Carlo estimator.
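The conditioning idea can be sketched on a simplified discrete-time surplus process (a toy stand-in, not the paper's de Finetti model): instead of counting periods where the surplus is negative, accumulate the conditional probability of going negative given the previous surplus, which is available in closed form for exponential claims.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy surplus process: S_t = S_{t-1} + c - X_t with Exp(1) claims X_t;
# borrowing lets the process continue even when S_t < 0.
u, c, T, n = 5.0, 1.0, 20, 20_000
S = np.full(n, u)
naive = np.zeros(n)       # counts periods with S_t < 0 (the naive estimator)
smooth = np.zeros(n)      # accumulates conditional probabilities (smoothed)
for _ in range(T):
    # P(S_t < 0 | S_{t-1}) = P(X_t > S_{t-1} + c) = exp(-(S_{t-1} + c)) for
    # an Exp(1) claim; the max(., 0) handles paths already below -c.
    smooth += np.exp(-np.maximum(S + c, 0.0))
    S = S + c - rng.exponential(1.0, n)
    naive += (S < 0.0)

naive_est, smooth_est = naive.mean(), smooth.mean()
```

Both averages are unbiased for the expected time in the red over the horizon; the smoothed version replaces each 0/1 indicator with its conditional expectation, which is where the variance reduction comes from.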

11.
Because American option pricing involves a backward iterative search, this article combines the least-squares simulation method for American option pricing proposed by Longstaff and Schwartz with Markov chain Monte Carlo estimation of the regression coefficients, yielding a doubly simulated price for American options. Extensive empirical simulations of pricing non-dividend-paying American put options, compared against the well-known least-squares Monte Carlo method in terms of pricing error and related criteria, show that the MCMC-based regression algorithm prices American options more accurately. The empirical results indicate that the proposed pricing method is feasible, effective, and broadly applicable. Its drawback, shared with Monte Carlo methods in general, is an increase in computational cost.

12.
Implementations of the Monte Carlo EM Algorithm
The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm where the expectation in the E-step is computed numerically through Monte Carlo simulations. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost in obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling, whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation. We obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
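The MCEM iteration itself — a Monte Carlo approximation of the E-step followed by a closed-form M-step — can be sketched on a toy censored-data problem. The model, censoring point, and rejection-based truncated sampling below are illustrative assumptions, far simpler than the article's generalized linear mixed models.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy problem: x_i ~ N(theta, 1), right-censored at c. The E-step needs
# E[X | X >= c, theta], which we approximate by Monte Carlo rejection.
true_theta, c, n = 2.0, 2.5, 2_000
x = rng.normal(true_theta, 1.0, n)
obs = x[x < c]                      # fully observed values
n_cens = n - obs.size               # number of censored observations

theta = obs.mean()                  # crude (downward-biased) starting value
for _ in range(25):
    # Monte Carlo E-step: draws from N(theta, 1) truncated to [c, inf)
    z = rng.normal(theta, 1.0, 100_000)
    z = z[z >= c]
    cond_mean = z.mean()            # approximates E[X | X >= c, theta]
    # M-step: complete-data MLE of theta
    theta = (obs.sum() + n_cens * cond_mean) / n
```

With a large sample the iteration settles near the maximum likelihood estimate, here close to the true mean of 2. The article's recycling idea would reweight the previous iteration's draws of `z` instead of regenerating them each pass.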

13.
14.
This article discusses design ideas useful in the development of Markov chain Monte Carlo (MCMC) software. Goals of the design are to facilitate analysis of as many statistical models as possible, and to enable users to experiment with different MCMC algorithms as a research tool. These ideas have been used in YADAS, a system written in the Java language, but are also applicable in other object-oriented languages.

15.
Sampling from complex distributions is an important but challenging topic in scientific and statistical computation. We synthesize three ideas, tempering, resampling, and Markov moving, and propose a general framework of resampling Markov chain Monte Carlo (MCMC). This framework not only accommodates various existing algorithms, including resample-move, importance resampling MCMC, and equi-energy sampling, but also leads to a generalized resample-move algorithm. We provide some basic analysis of these algorithms within the general framework, and present three simulation studies to compare these algorithms together with parallel tempering in the difficult situation where new modes emerge in the tails of previous tempering distributions. Our analysis and empirical results suggest that generalized resample-move tends to perform the best among all the algorithms studied when the Markov kernels lead to fast mixing or even locally so toward restricted distributions, whereas parallel tempering tends to perform the best when the Markov kernels lead to slow mixing, without even converging fast to restricted distributions. Moreover, importance resampling MCMC and equi-energy sampling perform similarly to each other, often worse than independence Metropolis resampling MCMC. Therefore, different algorithms seem to have advantages in different settings.

16.
Abstract

We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow “moments” that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples: specification of a multivariate prior distribution in a constrained-parameter family, and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation, combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen, uses computational ideas and importance sampling identities of Gelfand and Carlin, Geyer, and Geyer and Thompson developed for Monte Carlo maximum likelihood, and has some similarities to the maximum likelihood methods of Wei and Tanner.
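The simulation-plus-Newton-Raphson recipe can be sketched on a toy one-parameter problem. The scale model, the common-random-numbers trick, and the hand-coded derivative below are illustrative assumptions standing in for the article's more general setup.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy problem: match the second moment of N(0, sigma^2) to the data by
# Newton-Raphson on the simulated moment condition.
data = rng.normal(0.0, 3.0, 10_000)
m2 = np.mean(data**2)                 # observed second moment

z = rng.standard_normal(100_000)      # common random numbers, fixed across
mz2 = np.mean(z**2)                   # iterations so g(sigma) is smooth
sigma = 1.0                           # starting value
for _ in range(20):
    g = sigma**2 * mz2 - m2           # simulated moment minus observed moment
    dg = 2.0 * sigma * mz2            # derivative of g with respect to sigma
    sigma -= g / dg                   # Newton-Raphson update
```

Reusing the same simulation draws `z` at every iteration makes the moment condition a deterministic function of sigma, so Newton-Raphson converges cleanly, here to roughly the data's standard deviation of 3.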

17.
Abstract

Using a stochastic model for the evolution of discrete characters among a group of organisms, we derive a Markov chain that simulates a Bayesian posterior distribution on the space of dendrograms. A transformation of the tree into a canonical cophenetic matrix form, with distinct entries along its superdiagonal, suggests a simple proposal distribution for selecting candidate trees “close” to the current tree in the chain. We apply the consequent Metropolis algorithm to published restriction site data on nine species of plants. The Markov chain mixes well from random starting trees, generating reproducible estimates and confidence sets for the path of evolution.

18.
Likelihood estimation in hierarchical models is often complicated by the fact that the likelihood function involves an analytically intractable integral. Numerical approximation to this integral is an option but it is generally not recommended when the integral dimension is high. An alternative approach is based on the ideas of Monte Carlo integration, which approximates the intractable integral by an empirical average based on simulations. This article investigates the efficiency of two Monte Carlo estimation methods, the Monte Carlo EM (MCEM) algorithm and simulated maximum likelihood (SML). We derive the asymptotic Monte Carlo errors of both methods and show that, even under the optimal SML importance sampling distribution, the efficiency of SML decreases rapidly (relative to that of MCEM) as the missing information about the unknown parameter increases. We illustrate our results in a simple mixed model example and perform a simulation study which shows that, compared to MCEM, SML can be extremely inefficient in practical applications.

19.
20.
We propose a novel class of Sequential Monte Carlo (SMC) algorithms, appropriate for inference in probabilistic graphical models. This class of algorithms adopts a divide-and-conquer approach based upon an auxiliary tree-structured decomposition of the model of interest, turning the overall inferential task into a collection of recursively solved subproblems. The proposed method is applicable to a broad class of probabilistic graphical models, including models with loops. Unlike a standard SMC sampler, the proposed divide-and-conquer SMC employs multiple independent populations of weighted particles, which are resampled, merged, and propagated as the method progresses. We illustrate empirically that this approach can outperform standard methods in terms of the accuracy of the posterior expectation and marginal likelihood approximations. Divide-and-conquer SMC also opens up novel parallel implementation options and the possibility of concentrating the computational effort on the most challenging subproblems. We demonstrate its performance on a Markov random field and on a hierarchical logistic regression problem. Supplementary materials including proofs and additional numerical results are available online.
