Similar Articles
20 similar articles found.
1.
Tremendous progress has been made in the last two decades in the area of high-dimensional regression, especially in the “large p, small n” setting. Such sample-starved settings inevitably lead to models which are potentially very unstable and hence quite unreliable. To this end, Bayesian shrinkage methods have generated a lot of recent interest in the modern high-dimensional regression and model selection context. Such methods span the wide spectrum of modern regression approaches and include, among others, spike-and-slab priors, the Bayesian lasso, ridge regression, and global-local shrinkage priors such as the Horseshoe prior and the Dirichlet–Laplace prior. These methods naturally facilitate tractable uncertainty quantification and have thus been used extensively across diverse applications. A common unifying feature of these models is that the corresponding priors on the regression coefficients can be expressed as a scale mixture of normals. This property has been leveraged extensively to develop various three-step Gibbs samplers to explore the corresponding intractable posteriors. The convergence of such samplers, however, is very slow in high-dimensional settings, making them disconnected from the very setting that they are intended to work in. To address this challenge, we propose a comprehensive and unifying framework to draw from the same family of posteriors via a class of tractable and scalable two-step blocked Gibbs samplers. We demonstrate that our proposed class of two-step blocked samplers exhibits vastly superior convergence behavior compared to the original three-step sampler in high-dimensional regimes on simulated data as well as data from a variety of applications including gene expression data, infrared spectroscopy data, and socio-economic/law enforcement data. We also provide a detailed theoretical underpinning to the new method by deriving explicit upper bounds for the (geometric) rate of convergence, and by proving that the proposed two-step sampler has superior spectral properties. Supplementary material for this article is available online.
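To make the contrast concrete, here is a minimal NumPy sketch of a three-step sampler versus a two-step blocked sampler for a ridge-type shrinkage model (one member of the scale-mixture-of-normals family). The model, the hyperparameter values, and the simulated data are all assumptions made for illustration; this is not the authors' code or their exact class of priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: y = X beta + noise, with a handful of nonzero coefficients
n, p = 50, 100
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_normal(n)

# Assumed hierarchy: beta_j | sigma2, tau2 ~ N(0, sigma2 * tau2),
# sigma2 ~ IG(a_s, b_s), tau2 ~ IG(a_t, b_t)
a_s, b_s, a_t, b_t = 1.0, 1.0, 1.0, 1.0

def inv_gamma(shape, rate):
    """Draw from an inverse-gamma distribution with the given shape and rate."""
    return 1.0 / rng.gamma(shape, 1.0 / rate)

def draw_beta(sigma2, tau2):
    """beta | sigma2, tau2, y is Gaussian with precision (X'X + I/tau2) / sigma2."""
    A_inv = np.linalg.inv(X.T @ X + np.eye(p) / tau2)
    A_inv = 0.5 * (A_inv + A_inv.T)        # guard against numerical asymmetry
    return rng.multivariate_normal(A_inv @ X.T @ y, sigma2 * A_inv)

def three_step(n_iter=500):
    """Classical scheme: beta | sigma2, tau2 -> sigma2 | beta, tau2 -> tau2 | beta, sigma2."""
    beta, sigma2, tau2 = np.zeros(p), 1.0, 1.0
    for _ in range(n_iter):
        beta = draw_beta(sigma2, tau2)
        resid = y - X @ beta
        sigma2 = inv_gamma(a_s + (n + p) / 2,
                           b_s + 0.5 * (resid @ resid + beta @ beta / tau2))
        tau2 = inv_gamma(a_t + p / 2, b_t + 0.5 * beta @ beta / sigma2)
    return beta, sigma2, tau2

def two_step_blocked(n_iter=500):
    """Blocked scheme: (beta, sigma2) | tau2 drawn jointly, then tau2 | beta, sigma2."""
    beta, sigma2, tau2 = np.zeros(p), 1.0, 1.0
    for _ in range(n_iter):
        # (1a) sigma2 | tau2, y with beta integrated out:
        #      y | sigma2, tau2 ~ N(0, sigma2 * (I + tau2 * X X'))
        M = np.eye(n) + tau2 * (X @ X.T)
        sigma2 = inv_gamma(a_s + n / 2, b_s + 0.5 * (y @ np.linalg.solve(M, y)))
        # (1b) beta | sigma2, tau2, y
        beta = draw_beta(sigma2, tau2)
        # (2)  tau2 | beta, sigma2
        tau2 = inv_gamma(a_t + p / 2, b_t + 0.5 * beta @ beta / sigma2)
    return beta, sigma2, tau2

print(np.round(two_step_blocked()[0][:6], 2))
```

The only structural difference is that the blocked version draws sigma2 from its conditional given tau2 with beta integrated out before drawing beta; that joint (beta, sigma2) draw is what collapses the three steps into two.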

2.
Based on the convergence rate defined by the Pearson-χ² distance, this paper discusses properties of different Gibbs sampling schemes. Under a set of regularity conditions, it is proved that the rate of convergence of the systematic scan Gibbs sampler is the norm of a forward operator. We also show that the collapsed Gibbs sampler, as proposed by Liu et al., has a faster convergence rate than the systematic scan Gibbs sampler; based on the definition of the convergence rate under the Pearson-χ² distance, this result is proved quantitatively. Using Theorem 2, we further prove that the convergence rate defined by Roberts and Sahu in terms of the spectral radius of a matrix is equivalent to the corresponding radius of the forward operator.

3.
The Gibbs sampler is a popular Markov chain Monte Carlo routine for generating random variates from distributions otherwise difficult to sample. A number of implementations are available for running a Gibbs sampler, varying in the order through which the full conditional distributions used by the Gibbs sampler are cycled or visited. A common, and in fact the original, implementation is the random scan strategy, whereby the full conditional distributions are updated in a randomly selected order each iteration. In this paper, we introduce a random scan Gibbs sampler which adaptively updates the selection probabilities or “learns” from all previous random variates generated during the Gibbs sampling. In the process, we outline a number of variations on the random scan Gibbs sampler which allow the practitioner many choices for setting the selection probabilities, and we prove convergence of the induced (Markov) chain to the stationary distribution of interest. Though we emphasize flexibility in user choice and specification of these random scan algorithms, we present a minimax random scan which determines the selection probabilities through decision-theoretic considerations on the precision of estimators of interest. We illustrate and apply the results presented by using the adaptive random scan Gibbs sampler developed to sample from multivariate Gaussian target distributions, to automate samplers for posterior simulation under Dirichlet process mixture models, and to fit mixtures of distributions.
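As a rough illustration of the adaptive random scan idea (not the paper's minimax rule), the sketch below samples a zero-mean bivariate Gaussian, picking a coordinate at random each iteration and slowly adapting the selection probabilities with a diminishing step size. The target covariance and the adaptation heuristic are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed zero-mean bivariate Gaussian target with unequal marginal variances
Sigma = np.array([[1.0, 1.2],
                  [1.2, 4.0]])

n_iter = 20_000
x = np.zeros(2)
probs = np.array([0.5, 0.5])          # coordinate-selection probabilities
samples = np.empty((n_iter, 2))

for t in range(n_iter):
    j = rng.choice(2, p=probs)        # random scan: pick a coordinate
    k = 1 - j
    # Gaussian full conditional of coordinate j given the other coordinate
    cond_mean = Sigma[j, k] / Sigma[k, k] * x[k]
    cond_var = Sigma[j, j] - Sigma[j, k] ** 2 / Sigma[k, k]
    x[j] = rng.normal(cond_mean, np.sqrt(cond_var))
    samples[t] = x

    # Assumed adaptation heuristic: nudge the selection probabilities toward the
    # running marginal variances, with a step size that vanishes (diminishing
    # adaptation), so the chain "learns" from all previous draws.
    if t > 200:
        v = samples[: t + 1].var(axis=0) + 1e-8
        gamma = 1.0 / (t + 1)
        probs = (1 - gamma) * probs + gamma * v / v.sum()
        probs /= probs.sum()

print("adapted selection probabilities:", np.round(probs, 3))
print("sample covariance:\n", np.round(np.cov(samples[n_iter // 2:].T), 2))
```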

4.
A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis–Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in computational biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.

5.
This article introduces a general method for Bayesian computing in richly parameterized models, structured Markov chain Monte Carlo (SMCMC), that is based on a blocked hybrid of the Gibbs sampling and Metropolis–Hastings algorithms. SMCMC speeds algorithm convergence by using the structure that is present in the problem to suggest an appropriate Metropolis–Hastings candidate distribution. Although the approach is easiest to describe for hierarchical normal linear models, we show that its extension to both nonnormal and nonlinear cases is straightforward. After describing the method in detail we compare its performance (in terms of run time and autocorrelation in the samples) to other existing methods, including the single-site updating Gibbs sampler available in the popular BUGS software package. Our results suggest significant improvements in convergence for many problems using SMCMC, as well as broad applicability of the method, including previously intractable hierarchical nonlinear model settings.

6.

The emergence of big data has led to so-called convergence complexity analysis, which is the study of how Markov chain Monte Carlo (MCMC) algorithms behave as the sample size, n, and/or the number of parameters, p, in the underlying data set increase. This type of analysis is often quite challenging, in part because existing results for fixed n and p are simply not sharp enough to yield good asymptotic results. One of the first convergence complexity results for an MCMC algorithm on a continuous state space is due to Yang and Rosenthal (2019), who established a mixing time result for a Gibbs sampler (for a simple Bayesian random effects model) that was introduced and studied by Rosenthal (Stat Comput 6:269–275, 1996). The asymptotic behavior of the spectral gap of this Gibbs sampler is, however, still unknown. We use a recently developed simulation technique (Qin et al. Electron J Stat 13:1790–1812, 2019) to provide substantial numerical evidence that the gap is bounded away from 0 as n → ∞. We also establish a pair of rigorous convergence complexity results for two different Gibbs samplers associated with a generalization of the random effects model considered by Rosenthal (Stat Comput 6:269–275, 1996). Our results show that, under a strong growth condition, the spectral gaps of these Gibbs samplers converge to 1 as the sample size increases.


7.
One of the most widely used samplers in practice is the component-wise Metropolis–Hastings (CMH) sampler, which updates in turn the components of a vector-valued Markov chain using accept–reject moves generated from a proposal distribution. When the target distribution of a Markov chain is irregularly shaped, a “good” proposal distribution for one region of the state space might be a “poor” one for another region. We consider a component-wise multiple-try Metropolis (CMTM) algorithm that chooses from a set of candidate moves sampled from different distributions. The computational efficiency is increased using an adaptation rule for the CMTM algorithm that dynamically builds a better set of proposal distributions as the Markov chain runs. The ergodicity of the adaptive chain is demonstrated theoretically. The performance is studied via simulations and real data examples. Supplementary material for this article is available online.

8.
Jiang and Tanner (2008) consider a method of classification using the Gibbs posterior, which is directly constructed from the empirical classification errors. They propose an algorithm to sample from the Gibbs posterior that utilizes a smoothed approximation of the empirical classification error, via a Gibbs sampler with augmented latent variables. In this paper, we note some drawbacks of this algorithm and propose an alternative method for sampling from the Gibbs posterior, based on the Metropolis algorithm. The numerical performance of the algorithms is examined and compared via simulated data. We find that the Metropolis algorithm produces good classification results at an improved computational speed.
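A minimal sketch of the Metropolis alternative follows: a random-walk Metropolis chain targeting a Gibbs posterior of the form exp(-lambda * empirical error) times a Gaussian prior for a linear classifier. The simulated data, the inverse temperature lambda, the prior scale, and the step size are all assumed; the point is only that no latent-variable augmentation or smoothing of the 0-1 loss is needed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy two-class data with an intercept column
n, d = 200, 3
X = np.hstack([np.ones((n, 1)), rng.standard_normal((n, d - 1))])
w_true = np.array([0.5, 2.0, -1.0])
y = np.sign(X @ w_true + 0.5 * rng.standard_normal(n))

lam = 20.0        # assumed inverse temperature of the Gibbs posterior
prior_sd = 5.0    # assumed Gaussian prior scale
step = 0.2        # random-walk proposal scale

def log_gibbs_posterior(w):
    """log of exp(-lam * empirical 0-1 error) times a N(0, prior_sd^2 I) prior."""
    err = np.mean(np.sign(X @ w) != y)
    return -lam * err - 0.5 * np.sum(w ** 2) / prior_sd ** 2

w = np.zeros(d)
lp = log_gibbs_posterior(w)
draws = np.empty((5000, d))
for t in range(5000):
    w_prop = w + step * rng.standard_normal(d)   # symmetric proposal
    lp_prop = log_gibbs_posterior(w_prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        w, lp = w_prop, lp_prop
    draws[t] = w

print("posterior mean of the weights:", np.round(draws[2000:].mean(axis=0), 2))
```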

9.
This article aims to provide a method for approximately predetermining convergence properties of the Gibbs sampler. This is to be done by first finding an approximate rate of convergence for a normal approximation of the target distribution. The rates of convergence for different implementation strategies of the Gibbs sampler are compared to find the best one. In general, the limiting convergence properties of the Gibbs sampler on a sequence of target distributions (approaching a limit) are not the same as the convergence properties of the Gibbs sampler on the limiting target distribution. Theoretical results are given in this article to justify that, under certain conditions, the convergence properties of the Gibbs sampler can be approximated as well. A number of practical examples are given for illustration.

10.
The partially collapsed Gibbs (PCG) sampler offers a new strategy for improving the convergence of a Gibbs sampler. PCG achieves faster convergence by reducing the conditioning in some of the draws of its parent Gibbs sampler. Although this can significantly improve convergence, care must be taken to ensure that the stationary distribution is preserved. The conditional distributions sampled in a PCG sampler may be incompatible, and permuting their order may upset the stationary distribution of the chain. Extra care must be taken when Metropolis–Hastings (MH) updates are used for some or all of the draws. Reducing the conditioning in an MH within Gibbs sampler can change the stationary distribution, even when the PCG sampler would work perfectly if MH were not used. In fact, a number of samplers of this sort that have been advocated in the literature do not actually have the target stationary distributions. In this article, we illustrate the challenges that may arise when using MH within a PCG sampler and develop a general strategy for using such updates while maintaining the desired stationary distribution. Theoretical arguments provide guidance when choosing between different MH within PCG sampling schemes. Finally, we illustrate the MH within PCG sampler and its computational advantage using several examples from our applied work.

11.
We consider fixed scan Gibbs and block Gibbs samplers for a Bayesian hierarchical random effects model with proper conjugate priors. A drift condition given in Meyn and Tweedie (1993, Chapter 15) is used to show that these Markov chains are geometrically ergodic. Showing that a Gibbs sampler is geometrically ergodic is the first step toward establishing central limit theorems, which can be used to approximate the error associated with Monte Carlo estimates of posterior quantities of interest. Thus, our results will be of practical interest to researchers using these Gibbs samplers for Bayesian data analysis.

12.
Poisson change-point models have been widely used for modelling inhomogeneous time series of count data. There are a number of methods available for estimating the parameters in these models using iterative techniques such as MCMC. Many of these techniques share the common problem that there does not seem to be a definitive way of knowing the number of iterations required to obtain sufficient convergence. In this paper, we show that the Gibbs sampler of the Poisson change-point model is geometrically ergodic. Establishing geometric ergodicity is crucial from a practical point of view as it implies the existence of a Markov chain central limit theorem, which can be used to obtain standard error estimates. We prove that the transition kernel is a trace-class operator, which implies geometric ergodicity of the sampler. We then provide a useful application of the sampler to a model for the quarterly driver fatality counts for the state of Victoria, Australia.
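For concreteness, the single change-point version of such a model admits a simple conjugate Gibbs sampler. The simulated counts and the Gamma(shape, rate) priors below are assumptions made for illustration; the quarterly fatality data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed simulated counts with a single change-point at t = 60
T, k_true = 100, 60
y = np.concatenate([rng.poisson(5.0, k_true), rng.poisson(2.0, T - k_true)])

a, b = 2.0, 1.0                 # assumed Gamma(shape a, rate b) prior for both rates
lam1, lam2, k = 1.0, 1.0, T // 2
cum = np.cumsum(y)
ks = np.arange(1, T)            # admissible change-point locations
draws = []

for _ in range(5000):
    # Conjugate gamma updates for the two Poisson rates given the change-point
    lam1 = rng.gamma(a + y[:k].sum(), 1.0 / (b + k))
    lam2 = rng.gamma(a + y[k:].sum(), 1.0 / (b + (T - k)))

    # Discrete full conditional of the change-point given the two rates
    loglik = (cum[ks - 1] * np.log(lam1) - ks * lam1
              + (cum[-1] - cum[ks - 1]) * np.log(lam2) - (T - ks) * lam2)
    probs = np.exp(loglik - loglik.max())
    k = rng.choice(ks, p=probs / probs.sum())

    draws.append((lam1, lam2, k))

draws = np.array(draws)
print("posterior means (lam1, lam2, k):", np.round(draws[2000:].mean(axis=0), 2))
```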

13.
The problem of clustering a group of observations according to some objective function (e.g., K-means clustering, variable selection) or a density (e.g., the posterior from a Dirichlet process mixture model prior) can be cast in the framework of Monte Carlo sampling for cluster indicators. We propose a new method called the evolutionary Monte Carlo clustering (EMCC) algorithm, in which three new “crossover moves,” based on swapping and reshuffling subcluster intersections, are proposed. We apply the EMCC algorithm to several clustering problems including Bernoulli clustering, biological sequence motif clustering, BIC-based variable selection, and mixture-of-normals clustering. We compare EMCC's performance both as a sampler and as a stochastic optimizer with Gibbs sampling, “split-merge” Metropolis–Hastings algorithms, K-means clustering, and the MCLUST algorithm.

14.
The Gibbs sampler, a popular routine among Markov chain Monte Carlo sampling methodologies, has revolutionized the application of Monte Carlo methods in statistical computing practice. The performance of the Gibbs sampler relies heavily on the choice of sweep strategy, that is, the means by which the components or blocks of the random vector X of interest are visited and updated. We develop an automated, adaptive algorithm for implementing the optimal sweep strategy as the Gibbs sampler traverses the sample space. The decision rules through which this strategy is chosen are based on convergence properties of the induced chain and precision of statistical inferences drawn from the generated Monte Carlo samples. As part of the development, we analytically derive closed-form expressions for the decision criteria of interest and present computationally feasible implementations of the adaptive random scan Gibbs sampler via a Gaussian approximation to the target distribution. We illustrate the results and algorithms presented by using the adaptive random scan Gibbs sampler developed to sample multivariate Gaussian target distributions, and screening test and image data. Research by RL and ZY was supported in part by a US National Science Foundation FRG grant 0139948 and a grant from Lawrence Livermore National Laboratory, Livermore, California, USA.

15.
The members of a set of conditional probability density functions are called compatible if there exists a joint probability density function that generates them. We generalize this concept by calling the conditionals functionally compatible if there exists a non-negative function that behaves like a joint density as far as generating the conditionals according to the probability calculus, but whose integral over the whole space is not necessarily finite. A necessary and sufficient condition for functional compatibility is given that provides a method of calculating this function, if it exists. A Markov transition function is then constructed using a set of functionally compatible conditional densities and it is shown, using the compatibility results, that the associated Markov chain is positive recurrent if and only if the conditionals are compatible. A Gibbs Markov chain, constructed via “Gibbs conditionals” from a hierarchical model with an improper posterior, is a special case. Therefore, the results of this article can be used to evaluate the consequences of applying the Gibbs sampler when the posterior's impropriety is unknown to the user. Our results cannot, however, be used to detect improper posteriors. Monte Carlo approximations based on Gibbs chains are shown to have undesirable limiting behavior when the posterior is improper. The results are applied to a Bayesian hierarchical one-way random effects model with an improper posterior distribution. The model is simple, but also quite similar to some models with improper posteriors that have been used in conjunction with the Gibbs sampler in the literature.

16.
We develop particle Gibbs samplers for static-parameter estimation in discretely observed piecewise deterministic processes (PDPs). PDPs are stochastic processes that jump randomly at a countable number of stopping times but otherwise evolve deterministically in continuous time. A sequential Monte Carlo (SMC) sampler for filtering in PDPs has recently been proposed. We first provide new insight into the consequences of an approximation inherent within that algorithm. We then derive a new representation of the algorithm. It simplifies ensuring that the importance weights exist and also allows the use of variance-reduction techniques known as backward and ancestor sampling. Finally, we propose a novel Gibbs step that improves mixing in particle Gibbs samplers whose SMC algorithms make use of large collections of auxiliary variables, such as many instances of SMC samplers. We provide a comparison between the two particle Gibbs samplers for PDPs developed in this paper. Simulation results indicate that they can outperform reversible-jump MCMC approaches.

17.
We consider the symmetric scan Gibbs sampler and give some explicit estimates of convergence rates in the Wasserstein distance for this Markov chain Monte Carlo algorithm under the Dobrushin uniqueness condition.
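As a point of reference, a symmetric scan simply sweeps the coordinates forward and then backward within each iteration. The toy Gaussian target below, with an AR(1)-style covariance, is assumed purely for illustration and is unrelated to the paper's theoretical setting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed zero-mean Gaussian target with covariance Sigma_ij = rho^|i-j|
d, rho = 5, 0.8
Sigma = rho ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
Q = np.linalg.inv(Sigma)                     # precision matrix

def update_coordinate(x, j):
    """Draw coordinate j from its Gaussian full conditional given the rest."""
    cond_mean = -(Q[j] @ x - Q[j, j] * x[j]) / Q[j, j]
    x[j] = rng.normal(cond_mean, 1.0 / np.sqrt(Q[j, j]))

x = np.zeros(d)
samples = []
for _ in range(10_000):
    # Symmetric scan: forward sweep 0..d-1 followed by backward sweep d-1..0
    for j in list(range(d)) + list(range(d - 1, -1, -1)):
        update_coordinate(x, j)
    samples.append(x.copy())

print("empirical covariance:\n", np.round(np.cov(np.array(samples).T), 2))
```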

18.
Metropolis algorithms along with Gibbs steps are proposed to perform a Bayesian analysis for the Block and Basu (ACBVE) bivariate exponential distribution. We also consider the use of Gibbs sampling to develop Bayesian inference for accelerated life tests assuming a power rule model and the ACBVE distribution. The methodology developed in this paper is illustrated with two examples.

19.
This article provides a new theory for the analysis of the particle Gibbs (PG) sampler (Andrieu et al., 2010). Following the work of Del Moral and Jasra (2017), we provide some analysis of the particle Gibbs sampler, giving first-order expansions of the kernel and minorization estimates. In addition, first-order propagation-of-chaos estimates are derived for empirical measures of the dual particle model with a frozen path, also known as the conditional sequential Monte Carlo (SMC) update of the PG sampler. Backward and forward PG samplers are discussed, including a first comparison of the contraction estimates obtained from the first-order estimates. We illustrate our results with an example of fixed parameter estimation arising in hidden Markov models.

20.
Topic models, and more specifically the class of latent Dirichlet allocation (LDA) models, are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated for by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
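For reference, the ordinary collapsed Gibbs sampler that serves as the baseline here can be written in a few lines. The tiny corpus, vocabulary size, number of topics, and Dirichlet hyperparameters below are assumptions, and this is not the parallel sparse partially collapsed sampler proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed tiny corpus: each document is a list of word ids from a vocabulary of size V
docs = [[0, 1, 1, 2, 0], [3, 4, 4, 5, 3], [0, 2, 4, 5, 1]]
V, K = 6, 2                  # vocabulary size and number of topics (assumed)
alpha, beta = 0.5, 0.1       # assumed symmetric Dirichlet hyperparameters

# Count tables and random initial topic assignments
n_dk = np.zeros((len(docs), K))   # topic counts per document
n_kw = np.zeros((K, V))           # word counts per topic
n_k = np.zeros(K)                 # total token counts per topic
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        n_dk[d, t] += 1; n_kw[t, w] += 1; n_k[t] += 1

for sweep in range(200):          # collapsed Gibbs sweeps over every token
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            # Remove the current token from all count tables
            n_dk[d, t] -= 1; n_kw[t, w] -= 1; n_k[t] -= 1
            # Collapsed conditional p(z = t | everything else), up to a constant
            p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            t = int(rng.choice(K, p=p / p.sum()))
            # Reinsert the token under its (possibly new) topic
            z[d][i] = t
            n_dk[d, t] += 1; n_kw[t, w] += 1; n_k[t] += 1

print("per-topic word counts:\n", n_kw)
```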
