Similar Literature

20 similar documents found (search time: 46 ms)
1.
The partially collapsed Gibbs (PCG) sampler offers a new strategy for improving the convergence of a Gibbs sampler. PCG achieves faster convergence by reducing the conditioning in some of the draws of its parent Gibbs sampler. Although this can significantly improve convergence, care must be taken to ensure that the stationary distribution is preserved: the conditional distributions sampled in a PCG sampler may be incompatible, and permuting their order may upset the stationary distribution of the chain. Extra care is needed when Metropolis-Hastings (MH) steps are used for some or all of the draws. Reducing the conditioning in an MH within Gibbs sampler can change the stationary distribution, even when the PCG sampler would work perfectly if MH were not used. In fact, a number of samplers of this sort that have been advocated in the literature do not actually have the target distribution as their stationary distribution. In this article, we illustrate the challenges that may arise when using MH within a PCG sampler and develop a general strategy for using such updates while maintaining the desired stationary distribution. Theoretical arguments provide guidance when choosing between different MH within PCG sampling schemes. Finally, we illustrate the MH within PCG sampler and its computational advantage using several examples from our applied work.
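As a minimal sketch of the reduced-conditioning idea (a toy case, not one of the paper's examples): for a standard bivariate normal, the parent Gibbs sampler draws x | y and then y | x, while a partially collapsed version replaces the first draw by a draw of x with y integrated out. Here the collapse is so complete that the sampler produces exact i.i.d. draws from the joint; all names below are illustrative.

```python
import math
import random

def pcg_bivariate_normal(rho, n_iter=10_000):
    """Toy partially collapsed Gibbs sampler for a standard bivariate
    normal with correlation rho: x is drawn marginally (y collapsed
    out), then y is drawn from its full conditional N(rho*x, 1-rho^2).
    Illustrative sketch only, not the paper's applied examples."""
    s = math.sqrt(1.0 - rho * rho)
    draws = []
    for _ in range(n_iter):
        x = random.gauss(0.0, 1.0)    # marginal draw: conditioning on y removed
        y = random.gauss(rho * x, s)  # full-conditional draw of y given x
        draws.append((x, y))
    return draws
```

The parent sampler would instead draw x from N(rho*y, 1-rho^2); reducing that conditioning is harmless here because x is drawn from its exact marginal, which is the kind of justification the PCG framework makes precise.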

2.
We consider the Bayesian analysis of constrained parameter and truncated data problems within a Gibbs sampling framework, concentrating on sampling the truncated densities that arise as full conditionals within the Gibbs sampler. In particular, we restrict attention to the normal, beta, and gamma densities. We demonstrate that, in many instances, it is possible to introduce a latent variable which facilitates an easy solution to the problem. We also discuss a novel approach to sampling truncated densities via a “black-box” algorithm, based on the latent variable idea, valid outside of the context of a Gibbs sampler.
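One concrete instance of the latent-variable idea, for a standard normal truncated to [a, b] (a generic construction sketched under assumed notation, not necessarily the authors' exact scheme): introduce u with joint density proportional to I(0 < u < exp(-x²/2)) I(a ≤ x ≤ b), so that both full conditionals are uniform draws.

```python
import math
import random

def truncnorm_gibbs(a, b, n_iter=10_000):
    """Gibbs sampler for a standard normal truncated to [a, b] via a
    uniform latent variable u; both full conditionals are uniform.
    Assumes [a, b] is not so far in the tail that exp(-x^2/2)
    underflows (no numerical guard, for illustration's sake)."""
    x = 0.5 * (a + b)
    draws = []
    for _ in range(n_iter):
        # u | x ~ Uniform(0, exp(-x^2 / 2))
        u = random.uniform(0.0, math.exp(-0.5 * x * x))
        # x | u ~ Uniform on [a, b] intersected with {x : exp(-x^2/2) >= u}
        r = math.sqrt(-2.0 * math.log(u))
        x = random.uniform(max(a, -r), min(b, r))
        draws.append(x)
    return draws
```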

3.
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model with normal base measure, Gibbs sampling algorithms based on the Pólya urn scheme are often used to simulate posterior draws in conjugate models (essentially, linear regression models and models for binary outcomes). In the nonconjugate case, some common problems associated with existing simulation algorithms include convergence and mixing difficulties.

This article proposes an algorithm for MDP models with exponential family likelihoods and normal base measures. The algorithm proceeds by making a Laplace approximation to the likelihood function, thereby matching the proposal with that of the Gibbs sampler. The proposal is accepted or rejected via a Metropolis-Hastings step. For conjugate MDP models, the algorithm is identical to the Gibbs sampler. The performance of the technique is investigated using a Poisson regression model with semiparametric random effects. The algorithm performs efficiently and reliably, even in problems where large-sample results do not guarantee the success of the Laplace approximation. This is demonstrated by a simulation study where most of the count data consist of small numbers. The technique is associated with substantial benefits relative to existing methods, both in terms of convergence properties and computational cost.
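A one-dimensional sketch of the proposal-matching idea: an independence Metropolis-Hastings step whose proposal is the Laplace (normal) approximation to the target, centered at the mode with variance equal to the inverse of the negative log-posterior Hessian. The mode and Hessian are assumed precomputed (e.g., by Newton's method); all names are illustrative.

```python
import math
import random

def laplace_mh_step(x, log_post, mode, neg_hess):
    """One independence MH update using the Laplace approximation
    N(mode, 1/neg_hess) as the proposal; `log_post` is the log target
    up to a constant.  Generic sketch, not the paper's MDP algorithm."""
    prop = random.gauss(mode, 1.0 / math.sqrt(neg_hess))
    # log proposal density (normal), up to the common additive constant
    log_q = lambda z: -0.5 * neg_hess * (z - mode) ** 2
    log_ratio = (log_post(prop) - log_post(x)) - (log_q(prop) - log_q(x))
    return prop if math.log(random.random()) < log_ratio else x
```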

4.
The Gibbs sampler, a popular routine amongst Markov chain Monte Carlo sampling methodologies, has revolutionized the application of Monte Carlo methods in statistical computing practice. The performance of the Gibbs sampler relies heavily on the choice of sweep strategy, that is, the means by which the components or blocks of the random vector X of interest are visited and updated. We develop an automated, adaptive algorithm for implementing the optimal sweep strategy as the Gibbs sampler traverses the sample space. The decision rules through which this strategy is chosen are based on convergence properties of the induced chain and the precision of statistical inferences drawn from the generated Monte Carlo samples. As part of the development, we analytically derive closed-form expressions for the decision criteria of interest and present computationally feasible implementations of the adaptive random scan Gibbs sampler via a Gaussian approximation to the target distribution. We illustrate the results and algorithms presented by using the adaptive random scan Gibbs sampler developed to sample multivariate Gaussian target distributions, and screening test and image data. Research by RL and ZY supported in part by a US National Science Foundation FRG grant 0139948 and a grant from Lawrence Livermore National Laboratory, Livermore, California, USA.

5.
The multivariate linear mixed model (MLMM) has become the most widely used tool for analyzing multi-outcome longitudinal data. Although it offers great flexibility for modeling the between- and within-subject correlation among multi-outcome repeated measures, the underlying normality assumption is vulnerable to potential atypical observations. We present a fully Bayesian approach to the multivariate t linear mixed model (MtLMM), which is a robust extension of the MLMM with the random effects and errors jointly distributed as a multivariate t distribution. Owing to the introduction of many hidden variables in the model, the conventional Markov chain Monte Carlo (MCMC) method may converge painfully slowly and thus fail to provide valid inference. To alleviate this problem, a computationally efficient inverse Bayes formulas (IBF) sampler coupled with the Gibbs scheme, called the IBF-Gibbs sampler, is developed and shown to be effective in drawing samples from the target distributions. The issues related to model determination and Bayesian predictive inference for future values are also investigated. The proposed methodologies are illustrated with a real example from an AIDS clinical trial and a careful simulation study.

6.
We consider the performance of three Markov chain Monte Carlo samplers: the Gibbs sampler, which cycles through coordinate directions; the Hit-and-Run (H&R) sampler, which randomly moves in any direction; and the Metropolis sampler, which moves with a probability that is a ratio of likelihoods. We obtain several analytical results. We provide a sufficient condition for geometric convergence on a bounded region S for the H&R sampler. For a general region S, we review the Schervish and Carlin sufficient condition for geometric convergence of the Gibbs sampler. We show that this Gibbs condition holds for a multivariate normal distribution, and that for a bivariate normal distribution the Gibbs marginal sample paths are each an AR(1) process; we obtain the standard errors of sample means and sample variances, which we later use to verify empirical Monte Carlo results. We empirically compare the Gibbs and H&R samplers on bivariate normal examples. For zero correlation, the Gibbs sampler provides independent data, resulting in better performance than H&R. As the absolute value of the correlation increases, H&R performance improves, with H&R substantially better for correlations above 0.9. We also suggest and study methods for choosing the number of replications, for estimating the standard error of point estimators, and for reducing point-estimator variance. We suggest using a single long run instead of multiple i.i.d. separate runs, and using overlapping batch statistics (OBS) to obtain the standard errors of estimates; additional empirical results show that OBS is accurate. Finally, we review the geometric convergence of the Metropolis algorithm and develop a Metropolisized H&R sampler. This sampler works well for high-dimensional and complicated integrands or Bayesian posterior densities.
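A minimal sketch of the bivariate normal Gibbs sampler discussed above (a standard construction; variable names are illustrative): each conditional is N(ρ · other, 1 − ρ²), and the recorded x-path is the AR(1) process with coefficient ρ² whose standard errors the abstract refers to.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=10_000):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho.  The marginal sample path of x is AR(1) with coefficient
    rho**2, so rho = 0 gives i.i.d. draws and |rho| near 1 gives the
    slow mixing that favors the Hit-and-Run sampler."""
    s = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    xs = []
    for _ in range(n_iter):
        x = random.gauss(rho * y, s)  # draw x | y
        y = random.gauss(rho * x, s)  # draw y | x
        xs.append(x)
    return xs
```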

7.
A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging, and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in computational biology: determination of consensus sequences, phylogenetic inference, and delineation of isochores via multiple change-point analysis.

8.
The problem of finding marginal distributions of multidimensional random quantities has many applications in probability and statistics. Many of the solutions currently in use are very computationally intensive. For example, in a Bayesian inference problem with a hierarchical prior distribution, one is often driven to multidimensional numerical integration to obtain marginal posterior distributions of the model parameters of interest. Recently, however, a group of Monte Carlo integration techniques that fall under the general banner of successive substitution sampling (SSS) have proven to be powerful tools for obtaining approximate answers in a very wide variety of Bayesian modeling situations. Answers may also be obtained at low cost, both in terms of computer power and user sophistication. Important special cases of SSS include the “Gibbs sampler” described by Gelfand and Smith and the “IP algorithm” described by Tanner and Wong. The major problem plaguing users of SSS is the difficulty in ascertaining when “convergence” of the algorithm has been obtained. This problem is compounded by the fact that what is produced by the sampler is not the functional form of the desired marginal posterior distribution, but a random sample from this distribution. This article gives a general proof of the convergence of SSS, sufficient conditions for both strong and weak convergence, and a convergence rate. We explore the connection between higher-order eigenfunctions of the transition operator and accelerated convergence via good initial distributions. We also provide asymptotic results for the sampling component of the error in estimating the distributions of interest. Finally, we give two detailed examples from familiar exponential family settings to illustrate the theory.

9.
Based on the convergence rate defined by the Pearson χ² distance, this paper discusses properties of different Gibbs sampling schemes. Under a set of regularity conditions, it is proved that the rate of convergence of systematic scan Gibbs samplers is the norm of a forward operator. We also discuss the result, proposed by Liu et al., that the collapsed Gibbs sampler has a faster convergence rate than the systematic scan Gibbs sampler; based on the convergence rate defined by the Pearson χ² distance, this paper proves this result quantitatively. Using Theorem 2, we also prove that the convergence rate defined via the spectral radius of a matrix by Robert and Shau is equivalent to the corresponding radius of the forward operator.
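For reference, the Pearson χ² distance between a density p and the target π, the metric by which this abstract's convergence rate is defined (the standard definition, stated here for the reader rather than quoted from the paper):

```latex
\chi^{2}(p,\pi)
  \;=\; \int \frac{\bigl(p(x)-\pi(x)\bigr)^{2}}{\pi(x)}\,dx
  \;=\; \operatorname{Var}_{\pi}\!\Bigl(\frac{p}{\pi}\Bigr)
```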

10.
Two noniterative algorithms for computing posteriors
In this paper, we first propose a noniterative sampling method to obtain an approximately i.i.d. sample from the posterior by combining the inverse Bayes formula (IBF), sampling/importance resampling, and posterior mode estimates. We then propose a new exact algorithm to compute posteriors by improving the PMDA-Exact using the sampling-wise IBF. If the posterior mode is available from the EM algorithm, these two algorithms compute posteriors well and eliminate the convergence problems of Markov chain Monte Carlo methods. We demonstrate the good performance of our methods with several examples.
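The sampling/importance resampling ingredient mentioned above can be sketched as a weighted resampling step (a generic sketch with illustrative names; `weight_fn` stands for the target-to-proposal density ratio and is not a function from the paper):

```python
import random

def sir_resample(proposal_draws, weight_fn, n_out):
    """Sampling/importance resampling: resample the proposal draws with
    probability proportional to their importance weights, yielding an
    approximately i.i.d. sample from the target (posterior)."""
    weights = [weight_fn(x) for x in proposal_draws]
    return random.choices(proposal_draws, weights=weights, k=n_out)
```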

11.
This article introduces a general method for Bayesian computing in richly parameterized models, structured Markov chain Monte Carlo (SMCMC), that is based on a blocked hybrid of the Gibbs sampling and Metropolis-Hastings algorithms. SMCMC speeds algorithm convergence by using the structure that is present in the problem to suggest an appropriate Metropolis-Hastings candidate distribution. Although the approach is easiest to describe for hierarchical normal linear models, we show that its extension to both nonnormal and nonlinear cases is straightforward. After describing the method in detail we compare its performance (in terms of run time and autocorrelation in the samples) to other existing methods, including the single-site updating Gibbs sampler available in the popular BUGS software package. Our results suggest significant improvements in convergence for many problems using SMCMC, as well as broad applicability of the method, including previously intractable hierarchical nonlinear model settings.

12.
We propose a flexible class of models based on scale mixtures of uniform distributions to construct shrinkage priors for covariance matrix estimation. This new class of priors enjoys a number of advantages over the traditional scale mixture of normal priors, including its simplicity and flexibility in characterizing the prior density. We also exhibit a simple, easy-to-implement Gibbs sampler for posterior simulation, which leads to efficient estimation in high-dimensional problems. We first discuss the theory and computational details of this new approach and then extend the basic model to a new class of multivariate conditional autoregressive models for analyzing multivariate areal data. The proposed spatial model flexibly characterizes both the spatial and the outcome correlation structures at an appealing computational cost. Examples consisting of both synthetic and real-world data show the utility of this new framework in terms of robust estimation as well as improved predictive performance. Supplementary materials are available online.

13.
This article aims to provide a method for approximately predetermining the convergence properties of the Gibbs sampler. This is done by first finding an approximate rate of convergence for a normal approximation of the target distribution. The rates of convergence for different implementation strategies of the Gibbs sampler are then compared to find the best one. In general, the limiting convergence properties of the Gibbs sampler on a sequence of target distributions (approaching a limit) are not the same as the convergence properties of the Gibbs sampler on the limiting target distribution. Theoretical results are given in this article to justify that, under suitable conditions, the convergence properties of the Gibbs sampler can nevertheless be approximated in this way. A number of practical examples are given for illustration.

14.
The Gibbs sampler is a popular Markov chain Monte Carlo routine for generating random variates from distributions otherwise difficult to sample. A number of implementations are available for running a Gibbs sampler, varying in the order through which the full conditional distributions used by the Gibbs sampler are cycled or visited. A common, and in fact the original, implementation is the random scan strategy, whereby the full conditional distributions are updated in a randomly selected order each iteration. In this paper, we introduce a random scan Gibbs sampler which adaptively updates the selection probabilities, or “learns,” from all previous random variates generated during the Gibbs sampling. In the process, we outline a number of variations on the random scan Gibbs sampler which allow the practitioner many choices for setting the selection probabilities, and we prove convergence of the induced (Markov) chain to the stationary distribution of interest. Though we emphasize flexibility in user choice and specification of these random scan algorithms, we present a minimax random scan which determines the selection probabilities through decision-theoretic considerations on the precision of estimators of interest. We illustrate and apply the results presented by using the adaptive random scan Gibbs sampler developed to sample from multivariate Gaussian target distributions, to automate samplers for posterior simulation under Dirichlet process mixture models, and to fit mixtures of distributions.
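A skeleton of the random scan strategy with user-specified selection probabilities (the object the adaptive scheme updates); the `full_conditionals[i](state)` interface is hypothetical, standing in for a draw of coordinate i given the rest:

```python
import random

def random_scan_gibbs(state, full_conditionals, probs, n_iter):
    """Random-scan Gibbs sweep: each iteration selects one coordinate
    with the given selection probabilities and redraws it from its
    full conditional.  An adaptive version would update `probs` as
    draws accumulate, as in the article."""
    indices = range(len(full_conditionals))
    path = []
    for _ in range(n_iter):
        i = random.choices(indices, weights=probs, k=1)[0]
        state[i] = full_conditionals[i](state)
        path.append(tuple(state))
    return path
```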

15.
This paper establishes explicit estimates of convergence rates for the blocked Gibbs sampler with random scan under the Dobrushin conditions. The estimates of convergence in the Wasserstein metric are obtained by taking purely analytic approaches.

16.
In Bayesian analysis, Markov chain Monte Carlo (MCMC) algorithms are an efficient and simple way to compute posteriors. However, the chain may appear to converge even when the posterior is improper, which leads to incorrect statistical inferences. In this paper, we focus on necessary and sufficient conditions under which improper hierarchical priors yield proper posteriors in a multivariate linear model. In addition, we carry out a simulation study to illustrate the theoretical results, in which Gibbs sampling and Metropolis-Hastings sampling are employed to generate the posteriors.

17.
Topic models, and more specifically the class of latent Dirichlet allocation (LDA) models, are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated for by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
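For context, the fully collapsed update that the partially collapsed sampler relaxes draws each topic indicator from the standard collapsed conditional p(z = k) ∝ (n_dk + α)(n_kw + β)/(n_k + Vβ). A sketch of that standard update (count arrays assumed already decremented for the current token; names are illustrative, not from the paper):

```python
import random

def sample_topic(d, w, n_dk, n_kw, n_k, alpha, beta, V):
    """One collapsed-Gibbs draw of an LDA topic indicator using the
    standard collapsed conditional; n_dk, n_kw, n_k are the usual
    document-topic, topic-word, and topic counts with the current
    token removed, and V is the vocabulary size."""
    K = len(n_k)
    weights = [(n_dk[d][k] + alpha) * (n_kw[k][w] + beta) / (n_k[k] + V * beta)
               for k in range(K)]
    return random.choices(range(K), weights=weights, k=1)[0]
```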

18.
Nonlinear reproductive dispersion random-effects models are a very broad class of statistical models that includes linear random-effects models, nonlinear random-effects models, generalized linear random-effects models, and exponential-family nonlinear random-effects models. This paper studies the Bayesian analysis of nonlinear reproductive dispersion random-effects models. By treating the random effects as missing data and applying a hybrid algorithm that combines Gibbs sampling with the Metropolis-Hastings algorithm (MH algorithm), simultaneous Bayesian estimates of the model parameters and the random effects are obtained. Finally, a simulation study and a real example illustrate the feasibility of the above algorithm.
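A generic random-walk Metropolis-Hastings update of the kind embedded in such a hybrid Gibbs/MH cycle (a minimal sketch, not the paper's specific algorithm; `log_target` stands for the log full conditional of the component being updated):

```python
import math
import random

def mh_step(x, log_target, scale=1.0):
    """One random-walk MH update: propose from N(x, scale^2) and accept
    with probability min(1, target ratio).  Used inside a Gibbs cycle
    when a full conditional cannot be sampled directly."""
    prop = random.gauss(x, scale)
    if math.log(random.random()) < log_target(prop) - log_target(x):
        return prop
    return x
```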

19.
Within the framework of nonlinear mean-variance models, this paper studies Bayesian case-deletion influence diagnostics based on the Kullback-Leibler (K-L) divergence; the Bayesian case-deletion influence diagnostic statistics are estimated by Gibbs sampling and the MH algorithm. A simulation study and a numerical example with rainbow trout data illustrate the feasibility of the diagnostic method.

20.
The ECM and ECME algorithms are generalizations of the EM algorithm in which the maximization (M) step is replaced by several conditional maximization (CM) steps. The order in which the CM-steps are performed is easy to change and generally affects how fast the algorithm converges. Moreover, the same order of CM-steps need not be used at each iteration, and in some applications it is feasible to group two or more CM-steps into one larger CM-step. These issues also arise when implementing the Gibbs sampler, and in this article we study them in the context of fitting log-linear and random-effects models with ECM-type algorithms. We find that some standard theoretical measures of the rate of convergence can be of little use in comparing the computational time required, and that common strategies such as using a random ordering may not provide the desired effects. We also develop two algorithms for fitting random-effects models to illustrate that, with careful selection of CM-steps, ECM-type algorithms can be substantially faster than the standard EM algorithm.
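A toy illustration of CM-step sweeps and their ordering (a sketch under assumed interfaces, not the article's log-linear or random-effects fits): each CM step conditionally maximizes the objective over one parameter block, and reordering or grouping the steps changes the speed of convergence but not the fixed point.

```python
def cm_sweeps(theta, cm_steps, n_iter=100):
    """Run repeated sweeps of conditional-maximization steps; each step
    maximizes the objective over one block with the others held fixed."""
    for _ in range(n_iter):
        for step in cm_steps:
            theta = step(*theta)
    return theta

# Toy concave objective f(a, b) = -(a**2 + b**2 - a*b - 3*a):
cm_a = lambda a, b: ((b + 3.0) / 2.0, b)  # maximize over a, with b fixed
cm_b = lambda a, b: (a, a / 2.0)          # maximize over b, with a fixed
print(cm_sweeps((0.0, 0.0), [cm_a, cm_b]))  # converges to (2.0, 1.0)
```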
