Similar Articles
20 similar articles found.
1.
The performance of Markov chain Monte Carlo (MCMC) algorithms like the Metropolis–Hastings random walk (MHRW) is highly dependent on the choice of scaling matrix for the proposal distributions. A popular choice of scaling matrix in adaptive MCMC methods is the empirical covariance matrix (ECM) of previous samples. However, this choice is problematic if the dimension of the target distribution is large, since the ECM then converges slowly and is computationally expensive to use. We propose two algorithms to improve convergence and decrease the computational cost of adaptive MCMC methods in cases where the precision (inverse covariance) matrix of the target density can be well approximated by a sparse matrix. The first is an algorithm for online estimation of the Cholesky factor of a sparse precision matrix. The second estimates the sparsity structure of the precision matrix. Combining the two algorithms allows us to construct precision-based adaptive MCMC algorithms that can be used as black-box methods for densities with unknown dependency structures. We construct precision-based versions of the adaptive MHRW and the adaptive Metropolis-adjusted Langevin algorithm and demonstrate the performance of the methods in two examples. Supplementary materials for this article are available online.
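A minimal sketch of the precision-based proposal idea described above (not the authors' online estimator): given a precision matrix Q supplied by the user, random-walk proposals with covariance proportional to Q⁻¹ can be generated by solving a triangular system with the Cholesky factor of Q. The target, the step size `ell`, and the tridiagonal toy precision below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_triangular

def mhrw_precision(log_target, x0, Q, ell=0.5, n_iter=5000, rng=None):
    """Random-walk Metropolis whose proposal covariance is ell^2 * Q^{-1},
    where Q is a (possibly sparse) precision matrix supplied by the user.
    A minimal, hypothetical sketch -- not the paper's online estimator."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(Q)                    # Q = L @ L.T
    x, lp = np.array(x0, float), log_target(np.array(x0, float))
    samples = np.empty((n_iter, x.size))
    for i in range(n_iter):
        z = rng.standard_normal(x.size)
        # Solve L.T y = z  =>  Cov(y) = Q^{-1}; scale by ell for step size.
        x_prop = x + ell * solve_triangular(L.T, z, lower=False)
        lp_prop = log_target(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # symmetric proposal
            x, lp = x_prop, lp_prop
        samples[i] = x
    return samples

# Toy usage: AR(1)-type Gaussian target whose precision is tridiagonal.
d = 10
Q_true = np.eye(d) * 2.0 - np.eye(d, k=1) * 0.9 - np.eye(d, k=-1) * 0.9
log_target = lambda x: -0.5 * x @ Q_true @ x
draws = mhrw_precision(log_target, np.zeros(d), Q_true, ell=0.8)
```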

2.
In this paper, we study the asymptotic efficiency of the delayed rejection strategy. In particular, the efficiency of the delayed rejection Metropolis–Hastings algorithm is compared to that of the regular Metropolis algorithm. To allow for a fair comparison, the study is carried out under optimal mixing conditions for each of these algorithms. After introducing optimal scaling results for the delayed rejection (DR) algorithm, we show that the second proposal, made after a first rejection, is discarded with a probability tending to 1 as the dimension of the target density increases. To overcome this drawback, a modification of the delayed rejection algorithm is proposed, in which the direction of the different proposals is fixed once and for all, and the Metropolis–Hastings accept-reject mechanism is used to select a proper scaling along the search direction. It is shown that this strategy significantly outperforms the original DR and Metropolis algorithms, especially when the dimension becomes large. We include numerical studies to validate these conclusions.
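For readers unfamiliar with delayed rejection, the following is a generic two-stage DR Metropolis sketch with symmetric Gaussian proposals (the standard Tierney–Mira construction), shown only to illustrate the mechanism; it is not the fixed-direction modification proposed in the paper, and the scales `sigma1` and `sigma2` are placeholder choices.

```python
import numpy as np

def dr_metropolis(log_pi, x0, sigma1=2.0, sigma2=0.5, n_iter=5000, rng=None):
    """Two-stage delayed-rejection Metropolis with symmetric Gaussian proposals.
    A minimal sketch of the generic DR scheme, not the paper's modification."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, float)
    lp_x = log_pi(x)
    samples = np.empty((n_iter, x.size))

    def log_q(a, b, sigma):            # log N(b; a, sigma^2 I), constants dropped
        return -0.5 * np.sum((b - a) ** 2) / sigma ** 2

    for i in range(n_iter):
        y1 = x + sigma1 * rng.standard_normal(x.size)
        lp_y1 = log_pi(y1)
        a1 = np.exp(min(0.0, lp_y1 - lp_x))          # stage-1 acceptance prob.
        if rng.uniform() < a1:
            x, lp_x = y1, lp_y1
        else:
            # Second, smaller-scale try from the current point x.
            y2 = x + sigma2 * rng.standard_normal(x.size)
            lp_y2 = log_pi(y2)
            a1_rev = np.exp(min(0.0, lp_y1 - lp_y2))  # alpha1(y2, y1)
            if a1_rev < 1.0:
                log_num = lp_y2 + log_q(y2, y1, sigma1) + np.log1p(-a1_rev)
                log_den = lp_x + log_q(x, y1, sigma1) + np.log1p(-a1)
                a2 = np.exp(min(0.0, log_num - log_den))
            else:
                a2 = 0.0                              # reverse first stage would accept
            if rng.uniform() < a2:
                x, lp_x = y2, lp_y2
        samples[i] = x
    return samples
```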

3.
This paper investigates the behaviour of the random walk Metropolis algorithm in high-dimensional problems. Here we concentrate on the case where the target density is a spatially homogeneous Gibbs distribution with finite-range interactions. The performance of the algorithm is strongly linked to the presence or absence of phase transition for the Gibbs distribution, the convergence time being approximately linear in dimension for problems where phase transition is not present. Related to this, there is an optimal way to scale the variance of the proposal distribution in order to maximise the speed of convergence of the algorithm. This turns out to involve scaling the variance of the proposal as the reciprocal of dimension (at least in the phase transition-free case). Moreover, the actual optimal scaling can be characterised in terms of the overall acceptance rate of the algorithm, the maximising value being 0.234, as predicted by studies on simpler classes of target density. The results are proved in the framework of a weak convergence result, which shows that the algorithm actually behaves like an infinite-dimensional diffusion process in high dimensions.
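The 0.234 guideline is often exploited in practice by adapting a single step-size parameter during burn-in; the sketch below is one generic way to do that (a Robbins–Monro update of a log scale), not the construction analysed in the paper. The starting scale 2.38/√d and the adaptation schedule are conventional assumptions.

```python
import numpy as np

def tuned_rwm(log_pi, x0, target_acc=0.234, n_adapt=2000, n_iter=5000, rng=None):
    """Random-walk Metropolis whose global step size is adapted during burn-in
    so that the average acceptance rate approaches ~0.234.  A generic
    Robbins-Monro tuning sketch, not the paper's construction."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, float)
    d = x.size
    lp = log_pi(x)
    log_scale = np.log(2.38 / np.sqrt(d))     # classical starting point
    samples, acc = np.empty((n_iter, d)), 0.0
    for i in range(n_adapt + n_iter):
        prop = x + np.exp(log_scale) * rng.standard_normal(d)
        lp_prop = log_pi(prop)
        a = np.exp(min(0.0, lp_prop - lp))    # acceptance probability
        if rng.uniform() < a:
            x, lp = prop, lp_prop
        if i < n_adapt:                       # adapt only during burn-in
            log_scale += (a - target_acc) / np.sqrt(i + 1)
        else:
            samples[i - n_adapt] = x
            acc += a
    return samples, acc / n_iter              # draws and average acceptance rate
```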

4.
This paper studies Bayesian inference for the Poisson inverse Gaussian regression model. Joint Bayesian estimates of the unknown model parameters and latent variables are computed with MCMC methods, including Gibbs sampling, the Metropolis-Hastings algorithm, and the Multiple-Try Metropolis algorithm, and two goodness-of-fit statistics are introduced to assess the adequacy of the proposed Poisson inverse Gaussian regression model. Several simulation studies and an empirical analysis illustrate the feasibility of the method.

5.
We propose a conditional density filtering (C-DF) algorithm for efficient online Bayesian inference. C-DF adapts MCMC sampling to the online setting, sampling from approximations to conditional posterior distributions obtained by propagating surrogate conditional sufficient statistics (a function of data and parameter estimates) as new data arrive. These quantities eliminate the need to store or process the entire dataset simultaneously and offer a number of desirable features, often including a reduction in memory requirements and runtime and improved mixing, along with state-of-the-art parameter inference and prediction. These improvements are demonstrated through several illustrative examples, including an application to high-dimensional compressed regression. In cases where the dimension of the model parameter does not grow with time, we also establish sufficient conditions under which C-DF samples converge to the target posterior distribution asymptotically as sampling proceeds and more data arrive. Supplementary materials for this article are available online.
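To make the idea of propagating surrogate conditional sufficient statistics concrete, here is a hypothetical sketch for a conjugate Gaussian linear model: each incoming batch only updates X'X, X'y, y'y, and n, and Gibbs draws for (beta, sigma2) are made from those statistics alone. The class name, priors, and update rules are generic conjugate choices, not the authors' exact C-DF recursions.

```python
import numpy as np

class CDFLinearModel:
    """Illustration of online inference via running sufficient statistics for a
    Gaussian linear model; a generic conjugate sketch, not the C-DF algorithm."""

    def __init__(self, p, tau2=10.0, a0=2.0, b0=2.0, rng=None):
        self.XtX = np.zeros((p, p)); self.Xty = np.zeros(p)
        self.yty = 0.0; self.n = 0
        self.tau2, self.a0, self.b0 = tau2, a0, b0
        self.rng = np.random.default_rng(rng)
        self.beta = np.zeros(p); self.sigma2 = 1.0

    def absorb(self, X, y):
        """Fold a new data batch into the sufficient statistics, then discard it."""
        self.XtX += X.T @ X; self.Xty += X.T @ y
        self.yty += y @ y;   self.n += len(y)

    def gibbs_step(self):
        # beta | sigma2: Gaussian with precision X'X/sigma2 + I/tau2.
        prec = self.XtX / self.sigma2 + np.eye(len(self.beta)) / self.tau2
        cov = np.linalg.inv(prec)
        mean = cov @ (self.Xty / self.sigma2)
        self.beta = self.rng.multivariate_normal(mean, cov)
        # sigma2 | beta: inverse gamma via the residual sum of squares,
        # computed entirely from the stored statistics.
        rss = self.yty - 2 * self.beta @ self.Xty + self.beta @ self.XtX @ self.beta
        self.sigma2 = 1.0 / self.rng.gamma(self.a0 + self.n / 2,
                                           1.0 / (self.b0 + 0.5 * rss))
        return self.beta.copy(), self.sigma2
```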

6.
This article considers Markov chain computational methods for incorporating uncertainty about the dimension of a parameter when performing inference within a Bayesian setting. A general class of methods is proposed for performing such computations, based upon a product space representation of the problem which is similar to that of Carlin and Chib. It is shown that all of the existing algorithms for incorporation of model uncertainty into Markov chain Monte Carlo (MCMC) can be derived as special cases of this general class of methods. In particular, we show that the popular reversible jump method is obtained when a special form of Metropolis–Hastings (M–H) algorithm is applied to the product space. Furthermore, the Gibbs sampling method and the variable selection method are shown to derive straightforwardly from the general framework. We believe that these new relationships between methods, which were until now seen as diverse procedures, are an important aid to the understanding of MCMC model selection procedures and may assist in the future development of improved procedures. Our discussion also sheds some light upon the important issues of “pseudo-prior” selection in the case of the Carlin and Chib sampler and choice of proposal distribution in the case of reversible jump. Finally, we propose efficient reversible jump proposal schemes that take advantage of any analytic structure that may be present in the model. These proposal schemes are compared with a standard reversible jump scheme for the problem of model order uncertainty in autoregressive time series, demonstrating the improvements which can be achieved through careful choice of proposals.

7.
We describe adaptive Markov chain Monte Carlo (MCMC) methods for sampling posterior distributions arising from Bayesian variable selection problems. Point-mass mixture priors are commonly used in Bayesian variable selection problems in regression. However, for generalized linear and nonlinear models where the conditional densities cannot be obtained directly, the resulting mixture posterior may be difficult to sample using standard MCMC methods due to multimodality. We introduce an adaptive MCMC scheme that automatically tunes the parameters of a family of mixture proposal distributions during simulation. The resulting chain adapts to sample efficiently from multimodal target distributions. For variable selection problems point-mass components are included in the mixture, and the associated weights adapt to approximate marginal posterior variable inclusion probabilities, while the remaining components approximate the posterior over nonzero values. The resulting sampler transitions efficiently between models, performing parameter estimation and variable selection simultaneously. Ergodicity and convergence are guaranteed by limiting the adaptation based on recent theoretical results. The algorithm is demonstrated on a logistic regression model, a sparse kernel regression, and a random field model from statistical biophysics; in each case the adaptive algorithm dramatically outperforms traditional Metropolis–Hastings algorithms. Supplementary materials for this article are available online.

8.
This paper introduces a new and computationally efficient Markov chain Monte Carlo (MCMC) estimation algorithm for the Bayesian analysis of zero-inflated, one-inflated, and zero-and-one-inflated beta regression models. The algorithm is computationally efficient in the sense that it has low MCMC autocorrelations and computational time. A simulation study shows that the proposed algorithm outperforms the slice sampling and random walk Metropolis–Hastings algorithms in both small and large sample settings. An empirical illustration on a loss given default banking model demonstrates the usefulness of the proposed algorithm.

9.
This article introduces a general method for Bayesian computing in richly parameterized models, structured Markov chain Monte Carlo (SMCMC), that is based on a blocked hybrid of the Gibbs sampling and Metropolis–Hastings algorithms. SMCMC speeds algorithm convergence by using the structure that is present in the problem to suggest an appropriate Metropolis–Hastings candidate distribution. Although the approach is easiest to describe for hierarchical normal linear models, we show that its extension to both nonnormal and nonlinear cases is straightforward. After describing the method in detail we compare its performance (in terms of run time and autocorrelation in the samples) to other existing methods, including the single-site updating Gibbs sampler available in the popular BUGS software package. Our results suggest significant improvements in convergence for many problems using SMCMC, as well as broad applicability of the method, including previously intractable hierarchical nonlinear model settings.

10.
We consider the problem of optimal scaling of the proposal variance for multidimensional random walk Metropolis algorithms. It is well known, for a wide range of continuous target densities, that the optimal scaling of the proposal variance leads to an average acceptance rate of 0.234. A natural question is therefore whether similar results hold for target densities which have discontinuities. In the current work, we answer in the affirmative for a class of spherically constrained target densities. Even though the acceptance probability is more complicated than for continuous target densities, the optimal scaling of the proposal variance again leads to an average acceptance rate of 0.234.

11.
Sampling from complex distributions is an important but challenging topic in scientific and statistical computation. We synthesize three ideas, tempering, resampling, and Markov moving, and propose a general framework of resampling Markov chain Monte Carlo (MCMC). This framework not only accommodates various existing algorithms, including resample-move, importance resampling MCMC, and equi-energy sampling, but also leads to a generalized resample-move algorithm. We provide some basic analysis of these algorithms within the general framework, and present three simulation studies to compare these algorithms together with parallel tempering in the difficult situation where new modes emerge in the tails of previous tempering distributions. Our analysis and empirical results suggest that generalized resample-move tends to perform the best among all the algorithms studied when the Markov kernels mix quickly, or at least do so locally, toward the restricted distributions, whereas parallel tempering tends to perform the best when the Markov kernels mix slowly and do not converge quickly even to the restricted distributions. Moreover, importance resampling MCMC and equi-energy sampling perform similarly to each other, often worse than independence Metropolis resampling MCMC. Therefore, different algorithms seem to have advantages in different settings.
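As a concrete point of reference for the resample-move idea (reweight between tempered targets, resample, then rejuvenate with Markov moves), the sketch below implements a basic tempered version for a one-dimensional bimodal toy target. The temperature ladder, proposal scale, and reference density are illustrative assumptions, and this is not the generalized resample-move algorithm of the article.

```python
import numpy as np

def resample_move(log_pi, betas, n_particles=2000, mh_steps=5, step=1.0, rng=None):
    """Generic tempered resample-move sampler: particles are reweighted between
    successive tempered targets pi^beta, multinomially resampled, and then
    rejuvenated with a few random-walk Metropolis moves."""
    rng = np.random.default_rng(rng)
    # Initial particles from a broad reference density q = N(0, 10^2).
    x = rng.normal(0.0, 10.0, size=n_particles)
    log_q = -0.5 * x ** 2 / 100.0              # log reference density (up to a constant)
    log_w = betas[0] * log_pi(x) - log_q
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        # Reweight from pi^b_prev to pi^b_next, then resample.
        log_w = log_w + (b_next - b_prev) * log_pi(x)
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
        log_w = np.zeros(n_particles)
        # Markov move: a few RWM steps targeting pi^b_next.
        for _ in range(mh_steps):
            prop = x + step * rng.standard_normal(n_particles)
            log_a = b_next * (log_pi(prop) - log_pi(x))
            accept = np.log(rng.uniform(size=n_particles)) < log_a
            x = np.where(accept, prop, x)
    return x

# Toy bimodal target with well-separated modes (unnormalized mixture of Gaussians).
log_pi = lambda x: np.logaddexp(-0.5 * (x - 6) ** 2, -0.5 * (x + 6) ** 2)
draws = resample_move(log_pi, np.linspace(0.05, 1.0, 20))
```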

12.
The need to calibrate increasingly complex statistical models requires a persistent effort for further advances on available, computationally intensive Monte Carlo methods. We study here an advanced version of familiar Markov chain Monte Carlo (MCMC) algorithms that sample from target distributions defined as changes of measure from Gaussian laws on general Hilbert spaces. Such a model structure arises in several contexts: we focus here on the important class of statistical models driven by diffusion paths, whence the Wiener process constitutes the reference Gaussian law. Particular emphasis is given to advanced Hybrid Monte Carlo (HMC), which makes large, derivative-driven steps in the state space (in contrast with local-move random-walk-type algorithms), with analytical and experimental results. We illustrate its computational advantages in various diffusion processes and observation regimes; examples include stochastic volatility and latent survival models. In contrast with their standard MCMC counterparts, the advanced versions have mesh-free mixing times, as these will not deteriorate upon refinement of the approximation of the inherently infinite-dimensional diffusion paths by the finite-dimensional ones used in practice when applying the algorithms on a computer.
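For orientation, a plain finite-dimensional Hybrid/Hamiltonian Monte Carlo step with a leapfrog integrator looks roughly as follows; the Hilbert-space versions studied in the article require additional care (e.g. preconditioning by the reference Gaussian measure), so this is only a generic sketch with an identity mass matrix and illustrative step-size settings.

```python
import numpy as np

def hmc(log_target, grad_log_target, x0, eps=0.1, n_leapfrog=20, n_iter=1000, rng=None):
    """Plain Hybrid/Hamiltonian Monte Carlo with a leapfrog integrator and an
    identity mass matrix.  A generic finite-dimensional sketch, not the
    advanced Hilbert-space sampler analysed in the abstract."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, float)
    lp = log_target(x)
    samples = np.empty((n_iter, x.size))
    for i in range(n_iter):
        p = rng.standard_normal(x.size)            # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * eps * grad_log_target(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_target(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_target(x_new)
        lp_new = log_target(x_new)
        # Accept/reject on the joint (position, momentum) energy.
        log_a = (lp_new - 0.5 * p_new @ p_new) - (lp - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_a:
            x, lp = x_new, lp_new
        samples[i] = x
    return samples
```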

13.
In Bayesian analysis of the multidimensional scaling model with an MCMC algorithm, we encounter the indeterminacy of rotation, reflection, and translation of the parameter matrix of interest. This type of indeterminacy may be seen in other multivariate latent variable models as well. In this paper, we propose to address this indeterminacy problem with a novel, offline post-processing method that is easily implemented using easy-to-use Markov chain Monte Carlo (MCMC) software. Specifically, the proposed post-processing method is based on generalized extended Procrustes analysis. The proposed method is compared with four existing methods for dealing with indeterminacy through analyses of artificial as well as real datasets. The proposed method achieved at least as good a performance as the best existing method. The benefit of the offline processing approach in the era of easy-to-use MCMC software is discussed.
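A basic version of Procrustes post-processing can be sketched as follows: each MCMC draw of the configuration matrix is centred and then rotated/reflected onto a reference configuration using the SVD solution of the orthogonal Procrustes problem. The generalized extended Procrustes analysis used in the paper is more elaborate; the reference choice here (the first centred draw) is an illustrative assumption.

```python
import numpy as np

def procrustes_align(draws, reference=None):
    """Post-process MCMC draws of an n x p configuration matrix (e.g. from a
    Bayesian multidimensional scaling model) by removing translation and
    rotation/reflection indeterminacy: each draw is centred, then rotated onto
    a reference configuration via orthogonal Procrustes."""
    draws = np.asarray(draws, float)               # shape (T, n, p)
    if reference is None:
        reference = draws[0] - draws[0].mean(axis=0)
    aligned = np.empty_like(draws)
    for t, X in enumerate(draws):
        Xc = X - X.mean(axis=0)                    # remove translation
        U, _, Vt = np.linalg.svd(Xc.T @ reference) # orthogonal Procrustes solution
        R = U @ Vt                                 # optimal rotation/reflection
        aligned[t] = Xc @ R
    return aligned
```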

14.
We investigate the use of adaptive MCMC algorithms to automatically tune the Markov chain parameters during a run. Examples include the Adaptive Metropolis (AM) multivariate algorithm of Haario, Saksman, and Tamminen (2001), Metropolis-within-Gibbs algorithms for nonconjugate hierarchical models, regionally adjusted Metropolis algorithms, and logarithmic scalings. Computer simulations indicate that the algorithms perform very well compared to nonadaptive algorithms, even in high dimension.
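A sketch in the spirit of the Adaptive Metropolis algorithm: after a burn-in period the Gaussian proposal covariance is set to (2.38²/d) times the running empirical covariance of the chain, plus a small ridge for stability. The burn-in length, initial scale, and ridge value are illustrative assumptions, and details of the cited implementations may differ.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=20000, adapt_start=1000,
                        eps=1e-6, rng=None):
    """Adaptive-Metropolis-style sampler: the proposal covariance is the scaled
    running empirical covariance of the chain.  A generic sketch only."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, float); d = x.size
    lp = log_target(x)
    mean = x.copy()
    cov = np.eye(d)                    # running empirical covariance estimate
    samples = np.empty((n_iter, d))
    for i in range(n_iter):
        if i < adapt_start:
            prop_cov = (0.1 ** 2 / d) * np.eye(d)          # fixed early proposal
        else:
            prop_cov = (2.38 ** 2 / d) * cov + eps * np.eye(d)
        prop = rng.multivariate_normal(x, prop_cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
        # Recursive updates of the running mean and covariance.
        delta = x - mean
        mean += delta / (i + 2)
        cov += (np.outer(delta, x - mean) - cov) / (i + 2)
    return samples
```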

15.
The asymptotic optimal scaling of random walk Metropolis (RWM) algorithms with Gaussian proposal distributions is well understood for certain specific classes of target distributions. These asymptotic results easily extend to any light-tailed proposal distribution with finite fourth moment. However, heavy-tailed proposal distributions such as the Cauchy distribution are known to have a number of desirable properties, and in many situations are preferable to light-tailed proposal distributions. Therefore we consider the question of scaling for Cauchy distributed proposals for a wide range of independent and identically distributed (iid) product densities. The results are somewhat surprising as to when Cauchy distributed proposals are, and are not, preferable to Gaussian proposal distributions. This provides motivation for finding proposal distributions which improve on both Gaussian and Cauchy proposals, both for finite-dimensional target distributions and asymptotically as the dimension of the target density, d → ∞. Throughout we seek the scaling of the proposal distribution which maximizes the expected squared jumping distance (ESJD).
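ESJD is straightforward to estimate by simulation, which is one way to compare proposal families empirically; the snippet below does this for Gaussian and Cauchy random-walk proposals on a standard Gaussian iid product target. The target and the particular scales tried are illustrative assumptions, not the paper's setup or conclusions.

```python
import numpy as np

def esjd_rwm(log_pi, proposal_sampler, scale, d=20, n_iter=20000, rng=None):
    """Estimate the expected squared jumping distance (ESJD) of a random-walk
    Metropolis chain for a given symmetric proposal family and scale."""
    rng = np.random.default_rng(rng)
    x = np.zeros(d)
    lp = log_pi(x)
    esjd = 0.0
    for _ in range(n_iter):
        prop = x + scale * proposal_sampler(rng, d)
        lp_prop = log_pi(prop)
        a = np.exp(min(0.0, lp_prop - lp))
        # ESJD accumulates acceptance probability times squared jump length.
        esjd += a * np.sum((prop - x) ** 2)
        if rng.uniform() < a:
            x, lp = prop, lp_prop
    return esjd / n_iter

log_pi = lambda x: -0.5 * np.sum(x ** 2)          # standard Gaussian iid target
gaussian = lambda rng, d: rng.standard_normal(d)
cauchy = lambda rng, d: rng.standard_cauchy(d)

for name, sampler, scale in [("Gaussian", gaussian, 2.38 / np.sqrt(20)),
                             ("Cauchy", cauchy, 0.4 / np.sqrt(20))]:
    print(name, esjd_rwm(log_pi, sampler, scale))
```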

16.
In the following article, we investigate a particle filter for approximating Feynman–Kac models with indicator potentials, and we use this algorithm within Markov chain Monte Carlo (MCMC) to learn static parameters of the model. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) or rare-event problems. Such models require the use of advanced particle filter or MCMC algorithms to perform estimation. One of the drawbacks of existing particle filters is that they may “collapse,” in that the algorithm may terminate early, due to the indicator potentials. In this article, using a newly developed special case of the locally adaptive particle filter, we use an algorithm that can deal with this latter problem, while introducing a random cost per time step. In particular, we show how this algorithm can be used within MCMC, using particle MCMC. It is established that, when computational time is not taken into account, the new MCMC algorithm applied to a simplified model has a lower asymptotic variance in comparison to a standard particle MCMC algorithm. Numerical examples are presented for ABC approximations of HMMs.

17.
We propose a novel approach, based on controlled partial differential equations of the diffusion type, to modeling advertising dynamics for a firm operating over a distributed market domain. Using our model, we consider a general type of finite-horizon profit maximization problem in a monopoly setting. By reformulating this profit maximization problem as an optimal control problem in infinite dimensions, we derive sufficient conditions for the existence of its optimal solutions under general profit functions, as well as state and control constraints, and provide a general characterization of the optimal solutions. Sharper, feedback-form characterizations of the optimal solutions are obtained for two variants of the general problem. The first author gratefully acknowledges financial support by the NSF, the DAAD, the SFB 611 (Bonn), and the Max-Planck-Institut für Mathematik (Leipzig) through an IPDE fellowship.

18.
Analyses of multivariate ordinal probit models typically use data augmentation to link the observed (discrete) data to latent (continuous) data via a censoring mechanism defined by a collection of “cutpoints.” Most standard models, for which effective Markov chain Monte Carlo (MCMC) sampling algorithms have been developed, use a separate (and independent) set of cutpoints for each element of the multivariate response. Motivated by the analysis of ratings data, we describe a particular class of multivariate ordinal probit models where it is desirable to use a common set of cutpoints. While this approach is attractive from a data-analytic perspective, we show that the existing efficient MCMC algorithms can no longer be accurately applied. Moreover, we show that attempts to implement these algorithms by numerically approximating required multivariate normal integrals over high-dimensional rectangular regions can result in severely degraded estimates of the posterior distribution. We propose a new data augmentation that is based on a covariance decomposition and that admits a simple and accurate MCMC algorithm. Our data augmentation requires only that univariate normal integrals be evaluated, which can be done quickly and with high accuracy. We provide theoretical results that suggest optimal decompositions within this class of data augmentations, and, based on the theory, recommend default decompositions that we demonstrate work well in practice. This article has supplementary material online.

19.
The gamma distribution arises frequently in Bayesian models, but there is not an easy-to-use conjugate prior for the shape parameter of a gamma. This inconvenience is usually dealt with by using either Metropolis–Hastings moves, rejection sampling methods, or numerical integration. However, in models with a large number of shape parameters, these existing methods are slower or more complicated than one would like, making them burdensome in practice. It turns out that the full conditional distribution of the gamma shape parameter is well approximated by a gamma distribution, even for small sample sizes, when the prior on the shape parameter is also a gamma distribution. This article introduces a quick and easy algorithm for finding a gamma distribution that approximates the full conditional distribution of the shape parameter. We empirically demonstrate the speed and accuracy of the approximation across a wide range of conditions. If exactness is required, the approximation can be used as a proposal distribution for Metropolis–Hastings. Supplementary material for this article is available online.
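One generic way to build such a gamma approximation (not necessarily the article's algorithm) is to match the mode and curvature of the log full conditional, which is available in closed form when the data are gamma with known rate and the prior on the shape is gamma. The priors, bracketing interval, and toy data below are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln, digamma, polygamma
from scipy.optimize import brentq

def gamma_shape_conditional(x, beta, a0=1.0, b0=1.0):
    """Return the log full conditional of a gamma shape parameter alpha, given
    data x ~ Gamma(alpha, rate=beta) and a Gamma(a0, rate=b0) prior on alpha,
    together with a Gamma(k, rate=r) approximation found by matching the mode
    and the curvature at the mode (a Laplace-style construction)."""
    n, s = len(x), np.sum(np.log(x))

    def log_cond(a):
        return ((a0 - 1) * np.log(a) - b0 * a
                + n * a * np.log(beta) + a * s - n * gammaln(a))

    def dlog_cond(a):
        return (a0 - 1) / a - b0 + n * np.log(beta) + s - n * digamma(a)

    # Mode of the full conditional (bracket chosen generously).
    mode = brentq(dlog_cond, 1e-6, 1e6)
    # Curvature at the mode; match a gamma with the same mode and curvature.
    curv = -(a0 - 1) / mode ** 2 - n * polygamma(1, mode)
    k = 1.0 - curv * mode ** 2           # shape of the approximating gamma
    r = (k - 1.0) / mode                 # rate of the approximating gamma
    return log_cond, (k, r)

# Usage: draw from the approximation, or use it as an independence MH proposal.
rng = np.random.default_rng(0)
x = rng.gamma(3.0, 1.0 / 2.0, size=50)   # toy data with true shape 3, rate 2
log_cond, (k, r) = gamma_shape_conditional(x, beta=2.0)
alpha_draw = rng.gamma(k, 1.0 / r)
```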

20.
This paper shows how the theory of Dirichlet forms can be used to deliver proofs of optimal scaling results for Markov chain Monte Carlo algorithms (specifically, Metropolis–Hastings random walk samplers) under regularity conditions which are substantially weaker than those required by the original approach (based on the use of infinitesimal generators). The Dirichlet form methods have the added advantage of providing an explicit construction of the underlying infinite-dimensional context. In particular, this enables us directly to establish weak convergence to the relevant infinite-dimensional distributions.
