Similar Articles
10 similar articles found
1.
The Hybrid Monte Carlo (HMC) algorithm provides a framework for sampling from complex, high-dimensional target distributions. In contrast with standard Markov chain Monte Carlo (MCMC) algorithms, it generates nonlocal, nonsymmetric moves in the state space, alleviating random walk type behaviour for the simulated trajectories. However, similarly to algorithms based on random walk or Langevin proposals, the number of steps required to explore the target distribution typically grows with the dimension of the state space. We define a generalized HMC algorithm which overcomes this problem for target measures arising as finite-dimensional approximations of measures π which have density with respect to a Gaussian measure on an infinite-dimensional Hilbert space. The key idea is to construct an MCMC method which is well defined on the Hilbert space itself. We successively address the following issues in the infinite-dimensional setting of a Hilbert space: (i) construction of a probability measure Π in an enlarged phase space having the target π as a marginal, together with a Hamiltonian flow that preserves Π; (ii) development of a suitable geometric numerical integrator for the Hamiltonian flow; and (iii) derivation of an accept/reject rule to ensure preservation of Π when using the above numerical integrator instead of the actual Hamiltonian flow. Experiments are reported that compare the new algorithm with standard HMC and with a version of the Langevin MCMC method defined on a Hilbert space.
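For orientation, the following is a minimal sketch of one step of standard finite-dimensional HMC (leapfrog integration plus Metropolis accept/reject), not the Hilbert-space generalization the article constructs; `log_prob` and `grad_log_prob` are hypothetical user-supplied callables for the target log-density and its gradient.

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    """One HMC transition: resample momentum, integrate, accept/reject."""
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.standard_normal(q.shape)              # resample Gaussian momentum
    current_h = -log_prob(q) + 0.5 * p @ p        # Hamiltonian = potential + kinetic

    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(q_new)    # initial half step (momentum)
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new                     # full step (position)
        p_new += step_size * grad_log_prob(q_new)      # full step (momentum)
    q_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(q_new)    # final half step (momentum)

    proposed_h = -log_prob(q_new) + 0.5 * p_new @ p_new
    # The accept/reject rule corrects for the integrator's error, playing the
    # role of step (iii) above in finite dimensions.
    if np.log(rng.uniform()) < current_h - proposed_h:
        return q_new
    return q
```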

2.
The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed for this purpose.
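As a concrete illustration of the conditioning problem, here is a minimal rejection-based sketch of endpoint-conditioned path sampling for a continuous-time Markov chain: simulate forward from the rate matrix and accept only paths that end in the required state. The article's exact sampler is more refined than this; `Q`, `a`, `b`, and `T` below are hypothetical names for the rate matrix, the endpoint states, and the time horizon.

```python
import numpy as np

def sample_conditioned_path(Q, a, b, T, rng=None, max_tries=100_000):
    """Sample a CTMC path on [0, T] from state a to state b by rejection."""
    rng = rng if rng is not None else np.random.default_rng()
    for _ in range(max_tries):
        times, states = [0.0], [a]
        t, s = 0.0, a
        while True:
            rate = -Q[s, s]                       # total exit rate of state s
            t += rng.exponential(1.0 / rate) if rate > 0 else np.inf
            if t >= T:
                break
            jump_probs = Q[s].copy()
            jump_probs[s] = 0.0
            jump_probs /= rate                    # normalized jump distribution
            s = int(rng.choice(len(Q), p=jump_probs))
            times.append(t)
            states.append(s)
        if s == b:                                # keep only paths ending in b
            return times, states
    raise RuntimeError("no accepted path; the endpoint may be too unlikely")
```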

3.
While mixtures of Gaussian distributions have been studied for more than a century, the construction of a reference Bayesian analysis of those models remains unsolved, with a general prohibition of improper priors due to the ill-posed nature of such statistical objects. This difficulty is usually bypassed by an empirical Bayes resolution. By creating a new parameterization centered on the mean and possibly the variance of the mixture distribution itself, we manage to develop here a weakly informative prior for a wide class of mixtures with an arbitrary number of components. We demonstrate that some posterior distributions associated with this prior and a minimal sample size are proper. We provide Markov chain Monte Carlo (MCMC) implementations that exhibit the expected exchangeability. We study only the univariate case here; the extension to multivariate location-scale mixtures is currently under study. An R package called Ultimixt is associated with this article. Supplementary material for this article is available online.

4.
In this article we discuss the problem of assessing the performance of Markov chain Monte Carlo (MCMC) algorithms on the basis of simulation output. In essence, we extend the original ideas of Gelman and Rubin and, more recently, Brooks and Gelman, to problems where we are able to split the variation inherent within the MCMC simulation output into two distinct groups. We show how such a diagnostic may be useful in assessing the performance of MCMC samplers addressing model choice problems, such as the reversible jump MCMC algorithm. In the model choice context, we show how the reversible jump MCMC simulation output for parameters that retain a coherent interpretation throughout the simulation can be used to assess convergence. By considering various decompositions of the sampling variance of this parameter, we can assess the performance of our MCMC sampler in terms of its mixing properties both within and between models, and we illustrate our approach in both the graphical Gaussian models and normal mixtures contexts. Finally, we provide an example of the application of our diagnostic to the assessment of the influence of different starting values on MCMC simulation output, thereby illustrating the wider utility of our method beyond the Bayesian model choice and reversible jump MCMC context.
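For reference, the ordinary Gelman-Rubin potential scale reduction factor that this diagnostic builds on can be sketched as follows; `chains` is a hypothetical (m, n) array of m parallel chains of length n for one scalar parameter that retains a coherent interpretation across models.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for an (m, n) array of chains."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled posterior variance estimate
    return np.sqrt(var_hat / W)                # values near 1 indicate convergence
```

The diagnostic described above replaces the within/between-chain decomposition with analogous within- and between-model decompositions of the same sampling variance.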

5.
Mixtures of linear mixed models (MLMMs) are useful for clustering grouped data and can be estimated by likelihood maximization through the Expectation–Maximization algorithm. A suitable number of components is then determined conventionally by comparing different mixture models using penalized log-likelihood criteria such as the Bayesian information criterion. We propose fitting MLMMs with variational methods, which can perform parameter estimation and model selection simultaneously. We describe a variational approximation for MLMMs where the variational lower bound is in closed form, allowing for fast evaluation, and we develop a novel variational greedy algorithm for model selection and learning of the mixture components. This approach handles algorithm initialization and returns a plausible number of mixture components automatically. In cases of weak identifiability of certain model parameters, we use hierarchical centering to reparameterize the model and show empirically that there is a gain in efficiency in variational algorithms similar to that in Markov chain Monte Carlo (MCMC) algorithms. Related to this, we prove that the approximate rate of convergence of variational algorithms by Gaussian approximation is equal to that of the corresponding Gibbs sampler, which suggests that reparameterizations can lead to improved convergence in variational algorithms just as in MCMC algorithms. Supplementary materials for the article are available online.
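As a toy illustration of what hierarchical centering means, the following sketch contrasts the uncentered and centered parameterizations of a simple random-intercept model; all names (`mu`, `tau`, `sigma`) are hypothetical, and this generative sketch is not the MLMM of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau, sigma = 1.0, 2.0, 0.5      # grand mean, group sd, observation sd
n_groups, n_obs = 50, 20

# Uncentered: y_ij = mu + b_i + e_ij, with offsets b_i ~ N(0, tau^2).
b = rng.normal(0.0, tau, n_groups)
y_unc = mu + b[:, None] + rng.normal(0.0, sigma, (n_groups, n_obs))

# Centered: y_ij = eta_i + e_ij, with group means eta_i ~ N(mu, tau^2).
# When tau dominates sigma, updates of (mu, eta) are far less correlated
# than updates of (mu, b), which is the efficiency gain referred to above.
eta = rng.normal(mu, tau, n_groups)
y_cen = eta[:, None] + rng.normal(0.0, sigma, (n_groups, n_obs))
```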

6.
Gaussian graphical models (GGMs) are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of GGMs extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous subgroups. In this article, we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable GGMs. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples of moderate dimension in which we compare our stochastic search with a Markov chain Monte Carlo (MCMC) algorithm. These experiments show that our stochastic search largely outperforms the MCMC algorithm both in computing times and in the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which MCMC algorithms are too slow to be practically useful.

7.
This article discusses design ideas useful in the development of Markov chain Monte Carlo (MCMC) software. Goals of the design are to facilitate analysis of as many statistical models as possible, and to enable users to experiment with different MCMC algorithms as a research tool. These ideas have been used in YADAS, a system written in the Java language, but are also applicable in other object-oriented languages.
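The design idea can be sketched in a few lines: each parameter object owns its own update rule, and the sampler simply iterates over parameter objects, so new models and new MCMC variants are assembled by composing objects. The sketch below uses Python and hypothetical class names purely for illustration; YADAS itself is written in Java.

```python
import numpy as np

class MetropolisParameter:
    """A parameter that knows how to propose and accept its own updates."""

    def __init__(self, value, log_post, step, rng):
        self.value, self.log_post, self.step, self.rng = value, log_post, step, rng

    def update(self, state):
        proposal = self.value + self.rng.normal(0.0, self.step)
        log_ratio = self.log_post(proposal, state) - self.log_post(self.value, state)
        if np.log(self.rng.uniform()) < log_ratio:
            self.value = proposal

class Sampler:
    """Runs a sweep over a dictionary of parameter objects each iteration."""

    def __init__(self, params):
        self.params = params

    def run(self, n_iter):
        trace = []
        for _ in range(n_iter):
            state = {name: p.value for name, p in self.params.items()}
            for p in self.params.values():
                p.update(state)
            trace.append(state)
        return trace
```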

8.
We consider Bayesian analysis of data from multivariate linear regression models whose errors have a distribution that is a scale mixture of normals. Such models are used to analyze data on financial returns, which are notoriously heavy-tailed. Let π denote the intractable posterior density that results when this regression model is combined with the standard non-informative prior on the unknown regression coefficients and scale matrix of the errors. Roughly speaking, the posterior is proper if and only if n ≥ d + k, where n is the sample size, d is the dimension of the response, and k is the number of covariates. We provide a method of making exact draws from π in the special case where n = d + k, and we study Markov chain Monte Carlo (MCMC) algorithms that can be used to explore π when n > d + k. In particular, we show how the Haar PX-DA technology studied in Hobert and Marchev (2008) [11] can be used to improve upon Liu’s (1996) [7] data augmentation (DA) algorithm. Indeed, the new algorithm that we introduce is theoretically superior to the DA algorithm, yet equivalent to DA in terms of computational complexity. Moreover, we analyze the convergence rates of these MCMC algorithms in the important special case where the regression errors have a Student’s t distribution. We prove that, under conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms converge at a geometric rate. These convergence rate results are important from a practical standpoint because geometric ergodicity guarantees the existence of central limit theorems, which are essential for the calculation of valid asymptotic standard errors for MCMC-based estimates.
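For intuition, one cycle of the basic DA algorithm in the simplest univariate special case (Student’s t errors with known unit scale and a flat prior on the regression coefficients) can be sketched as below; this is a hedged illustration with hypothetical names, not the multivariate Haar PX-DA algorithm of the article.

```python
import numpy as np

def da_step(beta, X, y, nu, rng):
    """One data-augmentation cycle for regression with t_nu errors (scale 1)."""
    resid = y - X @ beta
    # I-step: latent mixing weights from the scale-mixture representation,
    # w_i | beta, y ~ Gamma(shape=(nu + 1)/2, rate=(nu + resid_i^2)/2).
    w = rng.gamma((nu + 1) / 2, 2.0 / (nu + resid**2))
    # P-step: beta | w, y ~ N(beta_hat, (X' W X)^{-1}) with W = diag(w).
    XtW = X.T * w
    cov = np.linalg.inv(XtW @ X)
    beta_hat = cov @ (XtW @ y)
    return rng.multivariate_normal(beta_hat, cov)
```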

9.
In the following article, we investigate a particle filter for approximating Feynman–Kac models with indicator potentials, and we use this algorithm within Markov chain Monte Carlo (MCMC) to learn static parameters of the model. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) or rare-event problems. Such models require the use of advanced particle filter or MCMC algorithms to perform estimation. One of the drawbacks of existing particle filters is that they may “collapse,” in that the algorithm may terminate early, due to the indicator potentials. In this article, we employ a newly developed special case of the locally adaptive particle filter that can deal with this problem, at the price of a random cost per time step. In particular, we show how this algorithm can be used within MCMC, using particle MCMC. It is established that, when computational time is not taken into account, the new MCMC algorithm applied to a simplified model has a lower asymptotic variance than a standard particle MCMC algorithm. Numerical examples are presented for ABC approximations of HMMs.
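The collapse phenomenon is easy to see in a minimal bootstrap particle filter with an ABC-type indicator potential: when no particle lands within the tolerance, every weight is zero and the filter terminates early. The sketch below illustrates this failure mode only, not the locally adaptive algorithm of the article; `propagate` and `eps` are hypothetical names for the transition sampler and the ABC tolerance.

```python
import numpy as np

def indicator_pf(y, n_particles, propagate, eps, rng):
    """Bootstrap particle filter with indicator potentials 1{|y_t - x_t| < eps}."""
    x = propagate(np.zeros(n_particles), rng)       # initial particle cloud
    log_like = 0.0
    for t, y_t in enumerate(y):
        w = (np.abs(x - y_t) < eps).astype(float)   # indicator potential
        if w.sum() == 0:
            raise RuntimeError(f"particle filter collapsed at step {t}")
        log_like += np.log(w.mean())                # likelihood increment
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = propagate(x[idx], rng)                  # resample, then move
    return log_like
```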

10.
This paper is concerned with parameter estimation in linear and non-linear Itô type stochastic differential equations using Markov chain Monte Carlo (MCMC) methods. The MCMC methods studied in this paper are the Metropolis–Hastings and Hamiltonian Monte Carlo (HMC) algorithms. In these kinds of models, the computation of the energy function gradient needed by HMC and gradient-based optimization methods is non-trivial, and here we show how the gradient can be computed with a linear or non-linear Kalman filter-like recursion. We also show how, in the linear case, the differential equations in the gradient recursion can be solved using the matrix fraction decomposition. Numerical results for simulated examples are presented and discussed in detail.
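As a rough sketch of the quantities involved, the energy function (negative log-likelihood, up to an additive constant) of a discretized linear model can be evaluated with a standard Kalman filter as below; the article derives a filter-like recursion for the exact gradient, which is replaced here by central finite differences purely for illustration. `build_model` is a hypothetical callable mapping the parameter vector to the discretized system matrices.

```python
import numpy as np

def kf_energy(theta, y, build_model):
    """Negative log-likelihood (up to a constant) via the Kalman filter."""
    A, Q, H, R, m, P = build_model(theta)     # discretized dynamics from theta
    energy = 0.0
    for y_t in y:
        m, P = A @ m, A @ P @ A.T + Q                     # predict
        S = H @ P @ H.T + R                               # innovation covariance
        v = y_t - H @ m                                   # innovation
        energy += 0.5 * (v @ np.linalg.solve(S, v) + np.linalg.slogdet(S)[1])
        K = np.linalg.solve(S.T, H @ P).T                 # Kalman gain
        m, P = m + K @ v, P - K @ S @ K.T                 # update
    return energy

def energy_gradient_fd(theta, y, build_model, h=1e-5):
    """Central-difference stand-in for the exact gradient recursion of the paper."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = h
        grad[i] = (kf_energy(theta + e, y, build_model)
                   - kf_energy(theta - e, y, build_model)) / (2 * h)
    return grad
```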
