Algorithms for improving efficiency of discrete Markov chains

Authors: Nabanita Mukherjee, George Casella, Kshitij Khare

Institutions: 1. Department of Statistical Science, Duke University, Durham, USA; 2. Department of Statistics, University of Florida, Gainesville, USA
Abstract: Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach is to construct a Markov chain that converges to the desired probability distribution and run it to obtain an approximate sample. In this paper, we provide two methods for improving the performance of a given discrete reversible Markov chain. These methods require knowledge of the stationary distribution only up to a normalizing constant. Each method produces a reversible Markov chain that has the same stationary distribution as the original chain and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
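To make the objects in the abstract concrete, here is a minimal Python sketch (not the paper's own algorithms) that builds a Metropolis-Hastings transition matrix on a finite state space from a target known only up to a normalizing constant, and checks the Peskun ordering, in which one reversible chain dominates another when every off-diagonal transition probability is at least as large. All function names and the example target are illustrative.

```python
import numpy as np

def metropolis_hastings_matrix(Q, w):
    """Metropolis-Hastings transition matrix for proposal matrix Q and
    unnormalized target weights w. For i != j:
        P[i, j] = Q[i, j] * min(1, w[j] * Q[j, i] / (w[i] * Q[i, j])),
    and the diagonal absorbs the rejected mass so each row sums to 1."""
    n = len(w)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and Q[i, j] > 0:
                P[i, j] = Q[i, j] * min(1.0, (w[j] * Q[j, i]) / (w[i] * Q[i, j]))
        P[i, i] = 1.0 - P[i].sum()  # rejection probability stays at i
    return P

def peskun_dominates(P1, P2, tol=1e-12):
    """True if P1 dominates P2 in the Peskun ordering, i.e. every
    off-diagonal entry of P1 is at least the corresponding entry of P2."""
    off_diag = ~np.eye(P1.shape[0], dtype=bool)
    return bool(np.all(P1[off_diag] >= P2[off_diag] - tol))

# Example: unnormalized target on 3 states with a uniform proposal.
w = np.array([1.0, 2.0, 3.0])
Q = np.full((3, 3), 1.0 / 3.0)
P = metropolis_hastings_matrix(Q, w)

# A "lazy" version of P holds in place with probability 1/2; P has
# uniformly larger off-diagonal entries, so P dominates it.
lazy = 0.5 * P + 0.5 * np.eye(3)
print(peskun_dominates(P, lazy))   # P dominates the lazy chain
```

One can also verify numerically that `w / w.sum()` is stationary for `P` and that detailed balance holds, so `P` is reversible, which is the setting in which the Peskun ordering applies.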
This article is indexed in SpringerLink and other databases.
|