Langevin-Type Models II: Self-Targeting Candidates for MCMC Algorithms

Authors: O. Stramer, R. L. Tweedie

Affiliation: (1) Department of Statistics and Actuarial Science, University of Iowa, Iowa City, IA 52242, USA; (2) Division of Biostatistics, University of Minnesota, Minneapolis, MN 55455, USA

Abstract: The Metropolis-Hastings algorithm for estimating a distribution π is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have π as the invariant measure. Traditional methods use candidates essentially unconnected to π. We show that the class of candidate distributions developed in Part I (Stramer and Tweedie 1999), which self-target towards the high-density areas of π, produces Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for traditional candidate choices, such as the random walk. We illustrate this behavior for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. The detailed results are given in one dimension, but we indicate how they may extend successfully to higher dimensions.
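To make the candidate construction concrete, the sketch below shows a one-dimensional Metropolis-Hastings update with a Langevin-type (MALA-style) proposal that drifts toward the high-density region of π, together with the accept/reject rule that keeps π invariant. This is a minimal illustrative sketch: the target (a standard normal), the step size h, and the function names are assumptions made here, not the specific self-targeting candidates constructed in Part I.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):
    # Log-density of the target pi; a standard normal is used purely for illustration.
    return -0.5 * x ** 2

def grad_log_pi(x):
    # Gradient of log pi; the Langevin-type candidate uses it to "self-target"
    # proposals toward the high-density region of pi.
    return -x

def log_q(y, x, h):
    # Log-density (up to a constant) of the candidate y ~ N(x + (h/2) grad log pi(x), h).
    mean = x + 0.5 * h * grad_log_pi(x)
    return -0.5 * (y - mean) ** 2 / h

def mh_langevin_step(x, h):
    # One Metropolis-Hastings step: propose from the Langevin candidate,
    # then accept or reject so that pi remains the invariant measure.
    y = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal()
    log_alpha = (log_pi(y) + log_q(x, y, h)) - (log_pi(x) + log_q(y, x, h))
    return y if np.log(rng.uniform()) < log_alpha else x

# Run a short chain; the sample mean and variance should approach 0 and 1.
x, h = 0.0, 0.5
chain = []
for _ in range(5000):
    x = mh_langevin_step(x, h)
    chain.append(x)
print(np.mean(chain), np.var(chain))
```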

Keywords: Hastings algorithms; Metropolis algorithms; Markov chain Monte Carlo; diffusions; Langevin models; discrete approximations; posterior distributions; irreducible Markov processes; geometric ergodicity; uniform ergodicity; Gibbs sampling
|