Similar Literature
1.
In a previous paper by the second author, two Markov chain Monte Carlo perfect sampling algorithms—one called coupling from the past (CFTP) and the other (FMMR) based on rejection sampling—are compared using as a case study the move‐to‐front (MTF) self‐organizing list chain. Here we revisit that case study and, in particular, exploit the dependence of FMMR on the user‐chosen initial state. We give a stochastic monotonicity result for the running time of FMMR applied to MTF and thus identify the initial state that gives the stochastically smallest running time; by contrast, the initial state used in the previous study gives the stochastically largest running time. By changing from worst choice to best choice of initial state we achieve remarkable speedup of FMMR for MTF; for example, we reduce the running time (as measured in Markov chain steps) from exponential in the length n of the list nearly down to n when the items in the list are requested according to a geometric distribution. For this same example, the running time for CFTP grows exponentially in n. © 2003 Wiley Periodicals, Inc. Random Struct. Alg., 2003

2.
In this paper, we propose a fully polynomial-time randomized approximation scheme (FPRAS) for a closed Jackson network. Our algorithm is based on the Markov chain Monte Carlo (MCMC) method. Thus our scheme returns an approximate solution, for which the size of the error satisfies a given bound. To our knowledge, this algorithm is the first polynomial-time MCMC algorithm for closed Jackson networks with multiple servers. We propose two new ergodic Markov chains, both of which have a unique stationary distribution that is the product form solution for closed Jackson networks. One of them is for an approximate sampler, and we show that it mixes rapidly. The other is for a perfect sampler based on the monotone coupling from the past (CFTP) algorithm proposed by Propp and Wilson, and we show that it has a monotone update function.

3.
In 1996, Propp and Wilson came up with a remarkably clever method for generating exact samples from the stationary distribution of a Markov chain [J.G. Propp, D.B. Wilson, Exact sampling with coupled Markov chains and applications to statistical mechanics, Random Structures and Algorithms 9 (1–2) (1996) 223–252]. Their method, called "perfect sampling" or "exact sampling", avoids the inherent bias of samples that are generated by running the chain for a large but fixed number of steps. It does so by using a strategy called "coupling from the past". Although the sampling mechanism used in their method is typically driven by independent random points, more structured sampling can also be used. Recently, Craiu and Meng [R.V. Craiu, X.-L. Meng, Antithetic coupling for perfect sampling, in: E.I. George (Ed.), Bayesian Methods with Applications to Science, Policy, and Official Statistics (Selected Papers from ISBA 2000), 2000, pp. 99–108; R.V. Craiu, X.-L. Meng, Multi-process parallel antithetic coupling for forward and backward Markov Chain Monte Carlo, Annals of Statistics 33 (2005) 661–697] suggested using different forms of antithetic coupling for that purpose. In this paper, we consider the use of highly uniform point sets to drive the exact sampling in Propp and Wilson's method, and illustrate the effectiveness of the proposed method with a few numerical examples.
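The coupling-from-the-past idea described above can be illustrated on a toy monotone chain. The sketch below (my own illustration, not taken from any of the papers cited here; all names are mine) applies CFTP to a lazy random walk on {0, ..., N}, whose stationary distribution is uniform: the top and bottom starting states are driven by the same fixed uniforms from ever further in the past, and once they coalesce the common value at time 0 is an exact stationary draw.

```python
import random

def cftp_sample(N, rng=None):
    """Exact sample from the (uniform) stationary law of a lazy walk on
    {0, ..., N} via Propp-Wilson coupling from the past."""
    rng = rng or random.Random()
    seeds = []      # seeds[k] drives the update at time -(k+1); fixed once, reused forever
    T = 1
    while True:
        while len(seeds) < T:
            seeds.append(rng.random())
        lo, hi = 0, N                    # extremal starting states (monotone coupling)
        for t in range(T, 0, -1):        # run from time -T up to time 0
            step = -1 if seeds[t - 1] < 0.5 else 1
            lo = min(max(lo + step, 0), N)
            hi = min(max(hi + step, 0), N)
        if lo == hi:                     # coalescence: every start gives this value
            return lo
        T *= 2                           # not coalesced: go further into the past
```

Because the update is monotone in the state, it suffices to track only the top and bottom trajectories; if they meet, every intermediate starting state would have produced the same value at time 0.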

4.
Summary. The paper is concerned with the exact simulation of an unobserved true point process conditional on a noisy observation. We use dominated coupling from the past (CFTP) on an augmented state space to produce perfect samples of the target marked point process. An optimized coupling of the target chains makes the algorithm considerably faster than with the standard coupling used in dominated CFTP for point processes. The perfect simulations are used for inference and the results are compared to those of an ordinary Metropolis-Hastings sampler.

5.
This primer provides a self-contained exposition of the case where spatial birth-and-death processes are used for perfect simulation of locally stable point processes. In particular, a simple dominating coupling from the past (CFTP) algorithm and the CFTP algorithms introduced in [13], [14], and [5] are studied. Some empirical results for the algorithms are discussed. Received: 30 June 2002

6.
The greatest drawback of Markov chain Monte Carlo methods is lack of knowledge of the mixing time of the chain. The use of bounding chains solves this difficulty for some chains by giving theoretical and experimental upper bounds on the mixing time. Moreover, when used with methodologies such as coupling from the past, bounding chains allow the user to take samples drawn exactly from the stationary distribution without knowledge of the mixing time. Here we present a bounding chain for the Swendsen‐Wang process. The Swendsen‐Wang bounding chain allows us to efficiently obtain exact samples from the ferromagnetic Q‐state Potts model for certain classes of graphs. Also, by analyzing this bounding chain, we show that Swendsen‐Wang is rapidly mixing over a slightly larger range of parameters than was previously known. © 2002 Wiley Periodicals, Inc. Random Struct. Alg., 22: 43–59, 2002

7.
In this paper, we give a simple algorithm for sampling from the Dickman distribution. It is based on coupling from the past with a suitable dominating Markov chain.
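For context, the Dickman law is the distribution of U1 + U1·U2 + U1·U2·U3 + ... for i.i.d. uniforms Ui on (0,1), equivalently the fixed point of X =d U(1+X). The sketch below is only a crude truncated sampler built on that series; it is not the paper's exact CFTP algorithm, and the function name is mine.

```python
import random

def dickman_approx(rng, depth=60):
    """Approximate Dickman sample: truncate X = U1 + U1*U2 + U1*U2*U3 + ...
    after `depth` terms. The neglected tail has expectation 2**(-depth)."""
    x, prod = 0.0, 1.0
    for _ in range(depth):
        prod *= rng.random()   # running product U1*...*Uk
        x += prod
    return x
```

A quick sanity check: the fixed-point equation gives E[X] = 1, which the truncated sampler reproduces closely since the dropped tail is astronomically small at depth 60.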

8.
In this paper, we introduce a new method for perfect simulation of multivariate densities. We use One-Shot CFTP (G. Roberts and J. Rosenthal, “One-shot coupling for certain stochastic recursive sequences,” Stochastic Processes and their Applications vol. 99 pp. 195–208, 2002) together with a monotone coupler for the Gibbs sampler, and implement the algorithm within the Read-Once CFTP protocol (D. B. Wilson, “How to couple from the past using a read-once source of randomness,” Random Structures and Algorithms vol. 16(1) pp. 85–113, 2000b). We illustrate our method by simulating efficiently from high-dimensional truncated normal distributions using the Gibbs sampler. AMS 2000 Subject Classification Primary 65C40; Secondary 65C60

9.
In this paper, we study the problem of sampling (exactly) uniformly from the set of linear extensions of an arbitrary partial order. Previous Markov chain techniques have yielded algorithms that generate approximately uniform samples. Here, we create a bounding chain for one such Markov chain, and by using a non-Markovian coupling together with a modified form of coupling from the past, we build an algorithm for perfectly generating samples. The expected running time of the procedure is O(n^3 ln n), making the technique as fast as the mixing time of the Karzanov/Khachiyan chain upon which it is based.
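As background, the underlying chain is easy to state: repeatedly pick a random adjacent pair in the current linear extension and (lazily) swap it whenever the partial order allows. Below is a minimal Python sketch of one such step, in the spirit of the Karzanov/Khachiyan chain; the paper's bounding-chain and non-Markovian-coupling machinery is not reproduced, and the function names are mine.

```python
import random

def le_chain_step(perm, less_than, rng):
    """One lazy step of the adjacent-transposition chain on linear extensions.
    `perm` is a list that is a linear extension of the partial order given by
    the predicate less_than(a, b). The step mutates `perm` in place."""
    i = rng.randrange(len(perm) - 1)          # random adjacent position
    a, b = perm[i], perm[i + 1]
    # Lazy coin, and swap only if the order does not force a before b.
    if rng.random() < 0.5 and not less_than(a, b):
        perm[i], perm[i + 1] = b, a
```

Since a swap is refused exactly when it would violate the partial order, every state the chain visits remains a valid linear extension, and the chain is irreducible on that set.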

10.
This paper decomposes a security price time series into a trend series and a Markov chain, and builds a Markov chain model for a portfolio of securities. The model is analyzed using Markov chain theory, and computational formulas are given for the rate of return, the risk, and the tangency portfolio over a sufficiently long time horizon.

11.
We address the following question: is the causal coupling method as strong as the conductance method in showing rapid mixing of Markov chains? A causal coupling is a coupling which uses only past and present information, but not information about the future. We answer the above question in the negative by showing that there exists a bipartite graph G such that any causal coupling argument on the Jerrum–Sinclair Markov chain for sampling almost uniformly from the set of perfect and near perfect matchings of G must necessarily take time exponential in the number of vertices in G. In contrast, the above Markov chain on G has been shown to mix in polynomial time using conductance arguments. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 18: 1–17, 2001

12.
Using a martingale construction, this paper studies strong limit theorems for odd-even Markov chain fields on Cayley trees. Strong laws of large numbers are established for the frequencies of occurrence of states and of ordered pairs of states in such fields, generalizing a known result.

13.
He and Xia (1997, Stochastic Processes Appl. 68, pp. 101–111) gave some error bounds for a Wasserstein distance between the distributions of the partial sum process of a Markov chain and a Poisson point process on the positive half-line. However, all these bounds increase logarithmically with the mean of the Poisson point process. In this paper, using the coupling method and a general deep result for estimating the errors of Poisson process approximation in Brown and Xia (2001, Ann. Probab. 29, pp. 1373–1403), we give a new error bound for the above Wasserstein distance. In contrast to the previous results of He and Xia (1997), our new error bound contains no logarithmic factor; it remains bounded and asymptotically constant as the mean increases.

14.
Adaptive Markov chain Monte Carlo (MCMC) algorithms attempt to ‘learn’ from the results of past iterations so that the Markov chain converges more quickly. Unfortunately, adaptive MCMC algorithms are no longer Markovian, so their convergence is difficult to guarantee. In this paper, we develop new diagnostics to determine whether the adaptation is still improving the convergence. We present an algorithm which automatically stops adapting once it determines that further adaptation will not increase the convergence speed. Our algorithm allows the computer to tune a ‘good’ Markov chain through multiple phases of adaptation, and then run conventional non-adaptive MCMC. In this way, the efficiency gains of adaptive MCMC can be obtained while still ensuring convergence to the target distribution.
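The "adapt, then freeze" idea can be illustrated with a toy random-walk Metropolis sampler. This is a generic sketch under my own naming, not the paper's diagnostics: the proposal scale is tuned toward a target acceptance rate during a burn-in phase and then held fixed, so that the sampling phase is a genuine Markov chain.

```python
import math
import random

def adaptive_rwm(logpi, n_adapt=5000, n_sample=20000, rng=None):
    """Two-phase random-walk Metropolis for a 1-D target with log-density logpi.
    Phase 1 adapts the proposal scale toward ~0.44 acceptance; phase 2 FREEZES
    the scale, restoring Markovianity, and returns the sampled path."""
    rng = rng or random.Random()
    x, scale, logp = 0.0, 1.0, logpi(0.0)
    acc = 0
    for i in range(n_adapt):                       # adaptation (burn-in) phase
        y = x + scale * rng.gauss(0, 1)
        lq = logpi(y)
        if rng.random() < math.exp(min(0.0, lq - logp)):
            x, logp, acc = y, lq, acc + 1
        if (i + 1) % 100 == 0:                     # tweak scale every 100 steps
            scale *= math.exp(acc / 100 - 0.44)
            acc = 0
    out = []
    for _ in range(n_sample):                      # sampling phase: scale fixed
        y = x + scale * rng.gauss(0, 1)
        lq = logpi(y)
        if rng.random() < math.exp(min(0.0, lq - logp)):
            x, logp = y, lq
        out.append(x)
    return out
```

Freezing the tuned scale is the simplest way to keep adaptive gains while guaranteeing the ergodic theory of ordinary MCMC applies to the samples actually used.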

15.
Jacobson and Matthews introduced the most promising method known for efficiently generating uniformly distributed random Latin squares. Cameron conjectures that the same Markov chain will also generate all of the other generalized 2‐designs with block size 3 uniformly at random. For a generalization of Latin squares, we give an affirmative result for any admissible parameter values. We also give the first insight into and analysis of a generalization of the 1‐factorization of the complete graph, giving an affirmative result for some admissible parameter values. © 2012 Wiley Periodicals, Inc. J. Combin. Designs 20: 368–380, 2012

16.
Summary. The integrated autocovariance and autocorrelation time are essential tools for understanding the dynamical behavior of a Markov chain. We study here these two objects for Markov chains with rare transitions, with no reversibility assumption. We give upper bounds for the autocovariance and the integrated autocorrelation time, as well as exponential equivalents at low temperature. We also link their slowest modes with the underlying energy landscape under mild assumptions. Our proofs are based on large deviation estimates coming from the theory of Wentzell and Freidlin and others [4, 3, 12], and on coupling arguments (see [6] for a review of the coupling method). Received 5 August 1996 / In revised form: 6 August 1997

17.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non‐reversible and will lose important properties, like the real‐valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
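The paper finds the closest reversible chain by convex minimization; as a simpler point of comparison, the classical additive reversibilization R = (P + D^{-1} P' D) / 2, with D = diag(pi) and P' the transpose, is always reversible with the same stationary distribution pi, though generally not the closest such chain. A Python sketch of that simpler construction (function names are mine):

```python
def stationary(P, iters=2000):
    """Stationary row vector of a stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def additive_reversibilization(P):
    """R = (P + D^-1 P^T D) / 2 with D = diag(pi): reversible w.r.t. pi,
    since pi_i R_ij = (pi_i P_ij + pi_j P_ji) / 2 is symmetric in (i, j)."""
    pi = stationary(P)
    n = len(P)
    return [[0.5 * (P[i][j] + pi[j] * P[j][i] / pi[i]) for j in range(n)]
            for i in range(n)]
```

Each row of R still sums to one because pi is stationary for P, so R is again a transition matrix, now with a real-valued spectrum.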

18.
In the paper we introduce stopping times for quantum Markov states. We study algebras and maps corresponding to stopping times, give a condition for the strong Markov property, and give a classification of projections with respect to the property of accessibility. Our main result is a new recurrence criterion in terms of stopping times (Theorem 1 and Corollary 2). As an application of the criterion, in Section 6 we show that the (usually non-Markovian) process, obtained by restricting the quantum Markov chain associated with the one-dimensional Heisenberg model to a diagonal subalgebra, is such that all its states are recurrent. We were not able to obtain this result from the known recurrence criteria of classical probability. Supported by GNAFA-CNR, Bando n. 211.01.25.

19.
A probabilistic method is given for constructing time-inhomogeneous Markov chains from local characterizations. Conditioning on the time of the first jump after an arbitrary instant, the Markov property of the process is proved and a recursive expression for the transition probabilities is obtained; this approach makes the probabilistic meaning explicit. The strong Markov property and a series of further properties are then established. This provides a rigorous theoretical foundation for numerical simulation, that is, for generating random trajectories of a time-inhomogeneous Q-process by Monte Carlo methods.

20.
This paper is concerned with the properties of the value-iteration operator which arises in undiscounted Markov decision problems. We give both necessary and sufficient conditions for this operator to reduce to a contraction operator, in which case it is easy to show that the value-iteration method exhibits a uniform geometric convergence rate. As necessary conditions we obtain a number of important characterizations of the chain and periodicity structures of the problem, and as sufficient conditions, we give a general “scrambling-type” recurrence condition, which encompasses a number of important special cases. Next, we show that a data transformation turns every unichained undiscounted Markov renewal program into an equivalent undiscounted Markov decision problem, in which the value-iteration operator is contracting, because it satisfies this “scrambling-type” condition. We exploit this contraction property in order to obtain lower and upper bounds as well as variational characterizations for the fixed point of the optimality equation and a test for eliminating suboptimal actions.
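For intuition, when the value-iteration operator is a contraction its fixed point can be found by plain successive approximation. The sketch below uses the standard discounted operator as a stand-in for the paper's undiscounted, "scrambling-type" contracting case (a simplification of mine, with illustrative names): with discount beta < 1 the operator is a beta-contraction in the sup norm, so the iterates converge geometrically.

```python
def value_iteration(P, r, beta=0.9, tol=1e-12, max_iter=100000):
    """Iterate (T v)(s) = max_a [ r[a][s] + beta * sum_t P[a][s][t] * v[t] ].
    P[a] is an n x n transition matrix for action a; r[a] its reward vector.
    Stops when successive iterates differ by less than tol in sup norm."""
    n = len(P[0])
    v = [0.0] * n
    for _ in range(max_iter):
        v_new = [max(r[a][s] + beta * sum(P[a][s][t] * v[t] for t in range(n))
                     for a in range(len(P)))
                 for s in range(n)]
        if max(abs(v_new[s] - v[s]) for s in range(n)) < tol:
            return v_new
        v = v_new
    return v
```

The stopping rule is justified by the contraction property: once the sup-norm change per sweep is below tol, the Bellman residual of the returned vector is below beta * tol.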
