Similar Articles

20 similar articles found.
1.
Abstract

In this article we discuss the problem of assessing the performance of Markov chain Monte Carlo (MCMC) algorithms on the basis of simulation output. In essence, we extend the original ideas of Gelman and Rubin and, more recently, Brooks and Gelman, to problems where we are able to split the variation inherent within the MCMC simulation output into two distinct groups. We show how such a diagnostic may be useful in assessing the performance of MCMC samplers addressing model choice problems, such as the reversible jump MCMC algorithm. In the model choice context, we show how the reversible jump MCMC simulation output for parameters that retain a coherent interpretation throughout the simulation can be used to assess convergence. By considering various decompositions of the sampling variance of this parameter, we can assess the performance of our MCMC sampler in terms of its mixing properties both within and between models, and we illustrate our approach in both the graphical Gaussian models and normal mixtures contexts. Finally, we provide an example of the application of our diagnostic to the assessment of the influence of different starting values on MCMC simulation output, thereby illustrating the wider utility of our method beyond the Bayesian model choice and reversible jump MCMC context.
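The within/between variance comparison underlying the Gelman–Rubin idea can be sketched in a few lines. This is a generic illustration of the potential scale reduction factor for a scalar parameter traced by several chains, not the extended diagnostic proposed in the article:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for m chains of length n.

    chains: array of shape (m, n) -- one scalar parameter, m parallel chains.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)             # R-hat; close to 1 at convergence

rng = np.random.default_rng(0)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))  # 4 well-mixed chains, same target
print(gelman_rubin(mixed))                    # near 1
```

Chains stuck at different levels inflate the between-chain term B, pushing R-hat well above 1.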

2.
This article considers Markov chain computational methods for incorporating uncertainty about the dimension of a parameter when performing inference within a Bayesian setting. A general class of methods is proposed for performing such computations, based upon a product space representation of the problem which is similar to that of Carlin and Chib. It is shown that all of the existing algorithms for incorporation of model uncertainty into Markov chain Monte Carlo (MCMC) can be derived as special cases of this general class of methods. In particular, we show that the popular reversible jump method is obtained when a special form of Metropolis–Hastings (M–H) algorithm is applied to the product space. Furthermore, the Gibbs sampling method and the variable selection method are shown to derive straightforwardly from the general framework. We believe that these new relationships between methods, which were until now seen as diverse procedures, are an important aid to the understanding of MCMC model selection procedures and may assist in the future development of improved procedures. Our discussion also sheds some light upon the important issues of “pseudo-prior” selection in the case of the Carlin and Chib sampler and choice of proposal distribution in the case of reversible jump. Finally, we propose efficient reversible jump proposal schemes that take advantage of any analytic structure that may be present in the model. These proposal schemes are compared with a standard reversible jump scheme for the problem of model order uncertainty in autoregressive time series, demonstrating the improvements which can be achieved through careful choice of proposals.

3.
Markov chain Monte Carlo (MCMC) methods for Bayesian computation are mostly used when the dominating measure is the Lebesgue measure, the counting measure, or a product of these. Many Bayesian problems give rise to distributions that are not dominated by the Lebesgue measure or the counting measure alone. In this article we introduce a simple framework for using MCMC algorithms in Bayesian computation with mixtures of mutually singular distributions. The idea is to find a common dominating measure that allows the use of traditional Metropolis-Hastings algorithms. In particular, using our formulation, the Gibbs sampler can be used whenever the full conditionals are available. We compare our formulation with the reversible jump approach and show that the two are closely related. We give results for three examples, involving testing a normal mean, variable selection in regression, and hypothesis testing for differential gene expression under multiple conditions. This allows us to compare the three methods considered: Metropolis-Hastings with mutually singular distributions, Gibbs sampler with mutually singular distributions, and reversible jump. In our examples, we found the Gibbs sampler to be more precise and to need considerably less computer time than the other methods. In addition, the full conditionals used in the Gibbs sampler can be used to further improve the estimates of the model posterior probabilities via Rao-Blackwellization, at no extra cost.

4.
Implementations of the Monte Carlo EM Algorithm
The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm where the expectation in the E-step is computed numerically through Monte Carlo simulations. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost in obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling, whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation. We obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
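The MCEM loop itself is simple to state: impute the latent quantities by Monte Carlo in the E-step, then maximize the completed-data likelihood in the M-step. The following is a toy sketch for estimating the mean of a right-censored normal sample, not the authors' implementation (the censoring setup, rejection sampler, and sample sizes are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def trunc_mean(mu, c, size):
    """Monte Carlo estimate of E[X | X > c] for X ~ N(mu, 1), by rejection."""
    out = np.empty(0)
    while out.size < size:
        x = rng.normal(mu, 1.0, size=4 * size)
        out = np.concatenate([out, x[x > c]])
    return out[:size].mean()

def mcem_censored_mean(obs, n_cens, c, iters=30, mc_size=3000):
    """MCEM for the mean of N(mu, 1) when n_cens values are right-censored at c.

    E-step: impute the mean of the censored values by Monte Carlo.
    M-step: closed-form average of the completed data.
    """
    mu = obs.mean()
    for _ in range(iters):
        latent = trunc_mean(mu, c, mc_size)                       # E-step
        mu = (obs.sum() + n_cens * latent) / (len(obs) + n_cens)  # M-step
    return mu

x = rng.normal(2.0, 1.0, size=500)   # true mean 2.0
c = 2.5
mu_hat = mcem_censored_mean(x[x <= c], n_cens=int((x > c).sum()), c=c)
print(mu_hat)
```

The article's refinements (recycling draws via importance sampling, and choosing `mc_size` adaptively from regenerative CLT error estimates) replace the fixed Monte Carlo effort used here.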

5.
We develop particle Gibbs samplers for static-parameter estimation in discretely observed piecewise deterministic processes (PDPs). PDPs are stochastic processes that jump randomly at a countable number of stopping times but otherwise evolve deterministically in continuous time. A sequential Monte Carlo (SMC) sampler for filtering in PDPs has recently been proposed. We first provide new insight into the consequences of an approximation inherent within that algorithm. We then derive a new representation of the algorithm. It simplifies ensuring that the importance weights exist and also allows the use of variance-reduction techniques known as backward and ancestor sampling. Finally, we propose a novel Gibbs step that improves mixing in particle Gibbs samplers whose SMC algorithms make use of large collections of auxiliary variables, such as many instances of SMC samplers. We provide a comparison between the two particle Gibbs samplers for PDPs developed in this paper. Simulation results indicate that they can outperform reversible-jump MCMC approaches.

6.
Markov chain Monte Carlo (MCMC) algorithms offer a very general approach for sampling from arbitrary distributions. However, designing and tuning MCMC algorithms for each new distribution can be challenging and time consuming. It is particularly difficult to create an efficient sampler when there is strong dependence among the variables in a multivariate distribution. We describe a two-pronged approach for constructing efficient, automated MCMC algorithms: (1) we propose the “factor slice sampler,” a generalization of the univariate slice sampler where we treat the selection of a coordinate basis (factors) as an additional tuning parameter, and (2) we develop an approach for automatically selecting tuning parameters to construct an efficient factor slice sampler. In addition to automating the factor slice sampler, our tuning approach also applies to the standard univariate slice samplers. We demonstrate the efficiency and general applicability of our automated MCMC algorithm with a number of illustrative examples. This article has online supplementary materials.
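The univariate slice sampler that the factor slice sampler generalizes can be sketched as follows. This is a minimal stepping-out/shrinkage implementation in the style of Neal's slice sampler; the standard-normal target and the width `w` are placeholder choices:

```python
import numpy as np

def slice_sample(logp, x0, n, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage (a sketch).

    logp: log-density up to a constant; w: initial interval width.
    """
    rng = rng or np.random.default_rng()
    x, out = x0, []
    for _ in range(n):
        logy = logp(x) + np.log(rng.uniform())  # auxiliary slice level
        left = x - w * rng.uniform()            # randomly positioned interval
        right = left + w
        while logp(left) > logy:                # step out until outside slice
            left -= w
        while logp(right) > logy:
            right += w
        while True:                             # sample, shrinking on rejection
            x_new = rng.uniform(left, right)
            if logp(x_new) > logy:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
draws = slice_sample(lambda x: -0.5 * x**2, 0.0, 5000, rng=rng)
print(draws.mean(), draws.std())
```

The article's second prong treats `w` (and, for the factor version, the coordinate basis itself) as tuning parameters to be selected automatically.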

7.
Poisson change-point models have been widely used for modelling inhomogeneous time-series of count data. There are a number of methods available for estimating the parameters in these models using iterative techniques such as MCMC. Many of these techniques share the common problem that there does not seem to be a definitive way of knowing the number of iterations required to obtain sufficient convergence. In this paper, we show that the Gibbs sampler of the Poisson change-point model is geometrically ergodic. Establishing geometric ergodicity is crucial from a practical point of view as it implies the existence of a Markov chain central limit theorem, which can be used to obtain standard error estimates. We prove that the transition kernel is a trace-class operator, which implies geometric ergodicity of the sampler. We then provide a useful application of the sampler to a model for the quarterly driver fatality counts for the state of Victoria, Australia.

8.
Topic models, and more specifically the class of latent Dirichlet allocation (LDA), are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.

9.
Semiparametric reproductive dispersion models generalize both reproductive dispersion models and semiparametric regression models, and include semiparametric generalized linear models and generalized partial linear models as special cases. We discuss Bayesian estimation of the parameters of this model, and model selection based on Bayes factors, in the setting where non-random missing data occur in both the response and the covariates. In the analysis, penalized splines are used to estimate the nonparametric component of the model, and a hierarchical Bayesian model is constructed. To address the poor mixing caused by highly correlated parameters in the Gibbs sampling process, and the instability that arises as the dimension increases, latent variables are introduced as augmented data and a collapsed Gibbs sampling method is applied, which improves convergence. Meanwhile, to avoid the computation of multiple integrals, the M-H algorithm is used to estimate the marginal density functions before computing Bayes factors, providing a criterion for model selection and comparison. Finally, simulations and a real example verify the effectiveness of the proposed method.

10.
The Bradley–Terry model is a popular approach to describe probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications including animal behavior, chess ranking, and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature including models with ties, multiple comparisons, group comparisons, and random graphs. From a computational point of view, Hunter has proposed efficient iterative minorization-maximization (MM) algorithms to perform maximum likelihood estimation for these generalized Bradley–Terry models whereas Bayesian inference is typically performed using Markov chain Monte Carlo algorithms based on tailored Metropolis–Hastings proposals. We show here that these MM algorithms can be reinterpreted as special instances of expectation-maximization algorithms associated with suitable sets of latent variables and propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
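For the basic model, Hunter's MM update has a particularly simple form: each strength is the item's total win count divided by a sum of pairwise game counts weighted by current strengths. A minimal sketch, with a hypothetical three-item win matrix as illustration:

```python
import numpy as np

def bradley_terry_mm(wins, iters=200):
    """MM updates for Bradley-Terry strengths (Hunter's iteration), a sketch.

    wins[i, j] = number of times item i beat item j.
    """
    k = wins.shape[0]
    n = wins + wins.T          # games played between each pair
    W = wins.sum(axis=1)       # total wins per item
    p = np.ones(k)
    for _ in range(iters):
        denom = n / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p = W / denom.sum(axis=1)   # MM update: p_i = W_i / sum_j n_ij/(p_i+p_j)
        p /= p.sum()                # fix the scale (strengths sum to 1)
    return p

wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]])
print(bradley_terry_mm(wins))
```

The article's point is that this same iteration falls out as an EM algorithm once suitable latent variables are introduced, which in turn yields Gibbs samplers for the Bayesian version.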

11.
Abstract

This article presents Bayesian inference for exponential mixtures, including the choice of a noninformative prior based on a location-scale reparameterization of the mixture. Adapted control sheets are proposed for studying the convergence of the associated Gibbs sampler. They exhibit a strong lack of stability in the allocations of the observations to the different components of the mixture. The setup is extended to the case when the number of components in the mixture is unknown and a reversible jump MCMC technique is implemented. The results are illustrated on simulations and a real dataset.

12.
Label switching is a well-known problem in the Bayesian analysis of mixture models. On the one hand, it complicates inference, and on the other hand, it has been perceived as a prerequisite to justify Markov chain Monte Carlo (MCMC) convergence. As a result, nonstandard MCMC algorithms that traverse the symmetric copies of the posterior distribution, and possibly genuine modes, have been proposed. To perform component-specific inference, methods to undo the label switching and to recover the interpretation of the components need to be applied. If latent allocations are included in the design of the MCMC strategy, and the sampler has converged, then labels assigned to each component may change from iteration to iteration. However, observations that are allocated together must remain similar, and we use this fundamental fact to derive an easy and efficient solution to the label switching problem. We compare our strategy with other relabeling algorithms on univariate and multivariate data examples and demonstrate improvements over alternative strategies. Supplementary materials for this article are available online.
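The idea that observations allocated together should stay together can be illustrated with a naive relabeling scheme: permute each iteration's labels to best agree with a reference allocation. This is a toy sketch, not the article's algorithm, and the exhaustive permutation search is only feasible for a small number of components:

```python
import numpy as np
from itertools import permutations

def relabel(allocs):
    """Undo label switching by permuting each iteration's labels to match
    a reference allocation (here, the first iteration).

    allocs: (iters, n_obs) array of component labels in {0, ..., k-1}.
    """
    ref = allocs[0]
    k = allocs.max() + 1
    out = np.empty_like(allocs)
    for t, z in enumerate(allocs):
        # choose the permutation of labels maximizing agreement with ref
        best = max(
            permutations(range(k)),
            key=lambda perm: np.sum(np.array(perm)[z] == ref),
        )
        out[t] = np.array(best)[z]
    return out

z = np.array([[0, 0, 1, 1],
              [1, 1, 0, 0],    # same clustering, labels switched
              [0, 0, 1, 1]])
print(relabel(z))              # all rows agree after relabeling
```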

13.
This paper introduces a new and computationally efficient Markov chain Monte Carlo (MCMC) estimation algorithm for the Bayesian analysis of zero, one, and zero and one inflated beta regression models. The algorithm is computationally efficient in the sense that it has low MCMC autocorrelations and computational time. A simulation study shows that the proposed algorithm outperforms the slice sampling and random walk Metropolis–Hastings algorithms in both small and large sample settings. An empirical illustration on a loss given default banking model demonstrates the usefulness of the proposed algorithm.

14.
Abstract

This article introduces a general method for Bayesian computing in richly parameterized models, structured Markov chain Monte Carlo (SMCMC), that is based on a blocked hybrid of the Gibbs sampling and Metropolis–Hastings algorithms. SMCMC speeds algorithm convergence by using the structure that is present in the problem to suggest an appropriate Metropolis–Hastings candidate distribution. Although the approach is easiest to describe for hierarchical normal linear models, we show that its extension to both nonnormal and nonlinear cases is straightforward. After describing the method in detail we compare its performance (in terms of run time and autocorrelation in the samples) to other existing methods, including the single-site updating Gibbs sampler available in the popular BUGS software package. Our results suggest significant improvements in convergence for many problems using SMCMC, as well as broad applicability of the method, including previously intractable hierarchical nonlinear model settings.

15.
This paper studies Bayesian inference for the Poisson inverse-Gaussian regression model. Joint Bayesian estimates of the unknown model parameters and latent variables are computed using MCMC methods, including Gibbs sampling, the Metropolis-Hastings algorithm, and the Multiple-Try Metropolis algorithm, and two goodness-of-fit statistics are introduced to assess the adequacy of the proposed Poisson inverse-Gaussian regression model. Several simulation studies and an empirical analysis illustrate the feasibility of the method.

16.
The partially collapsed Gibbs (PCG) sampler offers a new strategy for improving the convergence of a Gibbs sampler. PCG achieves faster convergence by reducing the conditioning in some of the draws of its parent Gibbs sampler. Although this can significantly improve convergence, care must be taken to ensure that the stationary distribution is preserved. The conditional distributions sampled in a PCG sampler may be incompatible and permuting their order may upset the stationary distribution of the chain. Extra care must be taken when Metropolis-Hastings (MH) updates are used in some or all of the updates. Reducing the conditioning in an MH within Gibbs sampler can change the stationary distribution, even when the PCG sampler would work perfectly if MH were not used. In fact, a number of samplers of this sort that have been advocated in the literature do not actually have the target stationary distributions. In this article, we illustrate the challenges that may arise when using MH within a PCG sampler and develop a general strategy for using such updates while maintaining the desired stationary distribution. Theoretical arguments provide guidance when choosing between different MH within PCG sampling schemes. Finally, we illustrate the MH within PCG sampler and its computational advantage using several examples from our applied work.

17.

The emergence of big data has led to so-called convergence complexity analysis, which is the study of how Markov chain Monte Carlo (MCMC) algorithms behave as the sample size, n, and/or the number of parameters, p, in the underlying data set increase. This type of analysis is often quite challenging, in part because existing results for fixed n and p are simply not sharp enough to yield good asymptotic results. One of the first convergence complexity results for an MCMC algorithm on a continuous state space is due to Yang and Rosenthal (2019), who established a mixing time result for a Gibbs sampler (for a simple Bayesian random effects model) that was introduced and studied by Rosenthal (Stat Comput 6:269–275, 1996). The asymptotic behavior of the spectral gap of this Gibbs sampler is, however, still unknown. We use a recently developed simulation technique (Qin et al. Electron J Stat 13:1790–1812, 2019) to provide substantial numerical evidence that the gap is bounded away from 0 as n → ∞. We also establish a pair of rigorous convergence complexity results for two different Gibbs samplers associated with a generalization of the random effects model considered by Rosenthal (Stat Comput 6:269–275, 1996). Our results show that, under a strong growth condition, the spectral gaps of these Gibbs samplers converge to 1 as the sample size increases.


18.
This article provides a new theory for the analysis of the particle Gibbs (PG) sampler (Andrieu et al., 2010). Following the work of Del Moral and Jasra (2017) we provide some analysis of the particle Gibbs sampler, giving first order expansions of the kernel and minorization estimates. In addition, first order propagation of chaos estimates are derived for empirical measures of the dual particle model with a frozen path, also known as the conditional sequential Monte Carlo (SMC) update of the PG sampler. Backward and forward PG samplers are discussed, including a first comparison of the contraction estimates obtained by first order estimates. We illustrate our results with an example of fixed parameter estimation arising in hidden Markov models.

19.
We establish an ordering criterion for the asymptotic variances of two consistent Markov chain Monte Carlo (MCMC) estimators: an importance sampling (IS) estimator, based on an approximate reversible chain and subsequent IS weighting, and a standard MCMC estimator, based on an exact reversible chain. Essentially, we relax the criterion of the Peskun type covariance ordering by considering two different invariant probabilities, and obtain, in place of a strict ordering of asymptotic variances, a bound of the asymptotic variance of IS by that of the direct MCMC. Simple examples show that IS can have arbitrarily better or worse asymptotic variance than Metropolis–Hastings and delayed-acceptance (DA) MCMC. Our ordering implies that IS is guaranteed to be competitive up to a factor depending on the supremum of the (marginal) IS weight. We elaborate upon the criterion in case of unbiased estimators as part of an auxiliary variable framework. We show how the criterion implies asymptotic variance guarantees for IS in terms of pseudo-marginal (PM) and DA corrections, essentially if the ratio of exact and approximate likelihoods is bounded. We also show that convergence of the IS chain can be less affected by unbounded high-variance unbiased estimators than PM and DA chains.
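The IS construction being compared is easy to sketch: run a chain targeting an approximate density, then reweight each draw by the ratio of exact to approximate densities and form a self-normalized estimate. The toy Gaussian targets below are illustrative choices (the surrogate is over-dispersed so that the IS weight is bounded, matching the boundedness condition in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_chain(logp, x0, n, step=1.0):
    """Random-walk Metropolis targeting exp(logp); a minimal sketch."""
    x, out = x0, np.empty(n)
    for i in range(n):
        y = x + step * rng.normal()
        if np.log(rng.uniform()) < logp(y) - logp(x):
            x = y
        out[i] = x
    return out

# Exact target: N(0, 1).  Approximate (cheap surrogate): N(0.3, 1.2^2).
log_exact = lambda x: -0.5 * x**2
log_approx = lambda x: -0.5 * ((x - 0.3) / 1.2) ** 2

xs = mh_chain(log_approx, 0.0, 20000)        # chain on the approximate target
w = np.exp(log_exact(xs) - log_approx(xs))   # marginal IS weights
est = np.sum(w * xs) / np.sum(w)             # self-normalized estimate of E[X]
print(est)                                   # close to 0, the exact mean
```

Because the surrogate has heavier tails than the exact target, the weight function is bounded, which is the regime in which the article's variance guarantee applies.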

20.
This paper is concerned with parameter estimation in linear and non-linear Itô type stochastic differential equations using Markov chain Monte Carlo (MCMC) methods. The MCMC methods studied in this paper are the Metropolis–Hastings and Hamiltonian Monte Carlo (HMC) algorithms. In these kinds of models, the computation of the energy function gradient needed by HMC and gradient-based optimization methods is non-trivial, and here we show how the gradient can be computed with a linear or non-linear Kalman filter-like recursion. We shall also show how, in the linear case, the differential equations in the gradient recursion equations can be solved using the matrix fraction decomposition. Numerical results for simulated examples are presented and discussed in detail.
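A generic HMC update with leapfrog integration looks as follows. The gradient here is the analytic gradient of a toy Gaussian target; in the SDE models of the paper it would instead come from the Kalman-filter-like recursion, and the step size and trajectory length are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, logp, grad, eps=0.1, n_leap=20):
    """One HMC update with leapfrog integration (a generic sketch)."""
    p = rng.normal(size=x.shape)              # refresh momentum
    x_new = x.copy()
    p_new = p + 0.5 * eps * grad(x_new)       # initial half momentum step
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p_new           # full position step
        p_new = p_new + eps * grad(x_new)     # full momentum step
    x_new = x_new + eps * p_new
    p_new = p_new + 0.5 * eps * grad(x_new)   # final half momentum step
    # Metropolis accept/reject on the joint (position, momentum) energy
    log_accept = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x

logp = lambda x: -0.5 * x @ x                 # standard normal toy target
grad = lambda x: -x

samples = []
x = np.zeros(2)
for _ in range(3000):
    x = hmc_step(x, logp, grad)
    samples.append(x)
draws = np.array(samples)
print(draws.mean(axis=0), draws.std(axis=0))  # near (0, 0) and (1, 1)
```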
