Similar Articles
A total of 20 similar articles were found (search time: 46 ms).
1.
The paper is concerned with the exact simulation of an unobserved true point process conditional on a noisy observation. We use dominated coupling from the past (CFTP) on an augmented state space to produce perfect samples of the target marked point process. An optimized coupling of the target chains makes the algorithm considerably faster than with the standard coupling used in dominated CFTP for point processes. The perfect simulations are used for inference and the results are compared to an ordinary Metropolis-Hastings sampler.

2.
In this paper, we introduce a new method for perfect simulation of multivariate densities. We use One-Shot CFTP (G. Roberts and J. Rosenthal, “One-shot coupling for certain stochastic recursive sequences,” Stochastic Processes and their Applications vol. 99 pp. 195–208, 2002) together with a monotone coupler for the Gibbs sampler, and implement the algorithm within the Read-Once CFTP protocol (D. B. Wilson, “How to couple from the past using a read-once source of randomness,” Random Structures and Algorithms vol. 16(1) pp. 85–113, 2000b). We illustrate our method by simulating efficiently from high-dimensional truncated normal distributions using the Gibbs sampler. AMS 2000 Subject Classification: Primary 65C40; Secondary 65C60
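A minimal sketch of just the Gibbs-sampler component for a truncated normal target: a bivariate normal restricted to the positive orthant with an illustrative correlation rho = 0.7. The One-Shot / Read-Once CFTP machinery and the paper's monotone coupler are not reproduced here; all names and parameter values are my own.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_truncated_bivariate_normal(rho=0.7, n_iter=5000, seed=0):
    """Gibbs sampling from N(0, [[1, rho], [rho, 1]]) restricted to x1, x2 >= 0."""
    rng = np.random.default_rng(seed)
    x1, x2 = 1.0, 1.0                      # arbitrary starting point in the orthant
    cond_sd = np.sqrt(1.0 - rho ** 2)      # sd of each full conditional
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        # x1 | x2 ~ N(rho * x2, 1 - rho^2) truncated to [0, inf)
        mu = rho * x2
        x1 = truncnorm.rvs((0.0 - mu) / cond_sd, np.inf, loc=mu, scale=cond_sd,
                           random_state=rng)
        # x2 | x1 is symmetric
        mu = rho * x1
        x2 = truncnorm.rvs((0.0 - mu) / cond_sd, np.inf, loc=mu, scale=cond_sd,
                           random_state=rng)
        samples[t] = (x1, x2)
    return samples

samples = gibbs_truncated_bivariate_normal()
print(samples[1000:].mean(axis=0))         # crude estimate of the truncated mean
```

When rho > 0, each full-conditional mean is increasing in the other coordinate, which is the monotone structure a coupler of the kind used in the paper can exploit.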

3.
Monte Carlo algorithms typically need to generate random variates from a probability distribution described by an unnormalized density or probability mass function. Perfect simulation algorithms generate random variates exactly from these distributions, but have a running time T that is itself an unbounded random variable. This article shows that commonly used protocols for creating perfect simulation algorithms, such as Coupling From the Past, can be used in such a fashion that the running time is unlikely to be very much larger than the expected running time. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008

4.
This paper proposes a polynomial time perfect (exact) sampling algorithm for 2 × n contingency tables. The algorithm is based on the monotone coupling from the past (monotone CFTP) algorithm. The expected running time is bounded by O(n³ ln N), where n is the number of columns and N is the total sum of all entries. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006
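The monotone coupler for contingency tables is specific to the paper; the sketch below only illustrates the general monotone-CFTP protocol (backward doubling, reused randomness, coalescence of the maximal and minimal states) on a toy monotone chain, a lazy reflected random walk on {0, ..., m}.

```python
import numpy as np

def update(x, u, m):
    """Monotone update function: the same uniform u moves every state the same way."""
    if u < 1.0 / 3.0:
        return max(0, x - 1)
    if u > 2.0 / 3.0:
        return min(m, x + 1)
    return x

def monotone_cftp(m=20, seed=0):
    rng = np.random.default_rng(seed)
    randomness = []                       # U_{-1}, U_{-2}, ... reused across restarts
    T = 1
    while True:
        while len(randomness) < T:
            randomness.append(rng.uniform())
        top, bottom = m, 0                # maximal and minimal states
        for t in range(T, 0, -1):         # run from time -T up to time 0
            u = randomness[t - 1]         # U_{-t}, fixed once drawn
            top = update(top, u, m)
            bottom = update(bottom, u, m)
        if top == bottom:                 # coalescence => exact stationary draw
            return top
        T *= 2                            # otherwise restart from further in the past

draws = [monotone_cftp(seed=s) for s in range(500)]
print(np.mean(draws))                     # should be close to m / 2 = 10 by symmetry
```

The stationary law of this toy chain is uniform on {0, ..., m}, so the empirical mean of the perfect draws is an easy sanity check.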

5.
We consider infinite order chains whose transition probabilities depend on a finite suffix of the past. These suffixes are of variable length and the set of suffix lengths is unbounded. We assume that the transition probabilities for each of these suffixes are continuous with an exponential decay rate. For these chains, we prove the weak consistency of a modification of Rissanen's algorithm Context, which estimates the length of the suffix needed to predict the next symbol, given a finite sample. This generalizes to the unbounded case the original result proved for variable length Markov chains in Rissanen's seminal 1983 paper. Our basic tool is the canonical Markov approximation, which enables us to approximate the chain of infinite order by a sequence of variable length Markov chains of increasing order. Our proof is constructive and we present an explicit decreasing upper bound for the probability of wrongly estimating the length of the current suffix.
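A simplified, threshold-based stand-in for the suffix-length estimator (the actual algorithm Context uses a penalized-likelihood criterion, and the paper's modification handles unbounded suffixes): it lengthens the candidate suffix until the empirical next-symbol distribution stops changing appreciably. The threshold delta and all names are illustrative.

```python
import random
from collections import Counter

def empirical_next_dist(sample, suffix, alphabet=(0, 1)):
    """Empirical distribution of the symbol following each occurrence of `suffix`."""
    k = len(suffix)
    counts = Counter(sample[i + k] for i in range(len(sample) - k)
                     if tuple(sample[i:i + k]) == suffix)
    total = sum(counts.values())
    if total == 0:
        return None
    return {a: counts[a] / total for a in alphabet}

def estimate_suffix_length(sample, past, max_len=8, delta=0.05):
    """Smallest k such that conditioning on k+1 past symbols changes little."""
    for k in range(max_len):
        shorter = tuple(past[len(past) - k:])
        longer = tuple(past[len(past) - k - 1:])
        p_short = empirical_next_dist(sample, shorter)
        p_long = empirical_next_dist(sample, longer)
        if p_long is None:                      # longer suffix never observed
            return k
        tv = 0.5 * sum(abs(p_short[a] - p_long[a]) for a in p_short)
        if tv <= delta:
            return k
    return max_len

# Toy usage: a sample from a first-order Markov chain should need suffix length ~1.
random.seed(1)
sample = [0]
for _ in range(20000):
    prev = sample[-1]
    sample.append(prev if random.random() < 0.8 else 1 - prev)
print(estimate_suffix_length(sample, past=sample[-10:]))
```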

6.
In this paper, we study the two-sided taboo limit processes that arise when a Markov chain or process is conditioned on staying in some set A for a long period of time. The taboo limit is time-homogeneous after time 0 and time-inhomogeneous before time 0. The time-reversed limit has this same qualitative structure. The precise transition structure at the taboo limit is identified in the context of discrete- and continuous-time Markov chains, as well as diffusions. In addition, we present a perfect simulation algorithm for generating exact samples from the quasi-stationary distribution of a finite-state Markov chain.
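For comparison with perfect sampling, the quasi-stationary distribution of a small finite-state chain can also be computed directly as the normalized left Perron eigenvector of the transition matrix restricted to the taboo set A. A minimal numpy sketch; the matrix P below is an arbitrary toy example, not from the paper.

```python
import numpy as np

def quasi_stationary_distribution(P, A):
    """QSD of the chain with transition matrix P, conditioned to stay in the index set A."""
    P_A = P[np.ix_(A, A)]                      # substochastic restriction to A
    eigvals, left_vecs = np.linalg.eig(P_A.T)  # columns are left eigenvectors of P_A
    k = np.argmax(eigvals.real)                # Perron root of the restriction
    v = np.abs(left_vecs[:, k].real)
    return v / v.sum()

# Toy example: a 4-state chain where leaving {0, 1, 2} means hitting the taboo state 3.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.3, 0.5, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
print(quasi_stationary_distribution(P, A=[0, 1, 2]))
```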

7.
Generation of deviates from random graph models with nontrivial edge dependence is an increasingly important problem. Here, we introduce a method which allows perfect sampling from random graph models in exponential family form (“exponential family random graph” models), using a variant of Coupling From The Past. We illustrate the use of the method via an application to the Markov graphs, a family that has been the subject of considerable research. We also show how the method can be applied to a variant of the biased net models, which are not exponentially parameterized.
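The CFTP construction itself is not reproduced here; the sketch below only shows the kind of single-tie Gibbs update that such perfect samplers couple, for a toy ERGM with edge and triangle statistics. The parameter values theta_edge and theta_tri are illustrative, not taken from the paper.

```python
import numpy as np

def gibbs_tie_update(adj, i, j, theta_edge, theta_tri, rng):
    """Resample the tie (i, j) from its full conditional under the ERGM."""
    common = int(np.sum(adj[i] * adj[j]))  # triangles created/destroyed by this tie
    logit = theta_edge + theta_tri * common
    p = 1.0 / (1.0 + np.exp(-logit))       # P(tie present | rest of the graph)
    adj[i, j] = adj[j, i] = int(rng.uniform() < p)

rng = np.random.default_rng(0)
n = 30
adj = np.zeros((n, n), dtype=int)
for sweep in range(200):                   # plain Gibbs sweeps, not perfect sampling
    for i in range(n):
        for j in range(i + 1, n):
            gibbs_tie_update(adj, i, j, theta_edge=-2.0, theta_tri=0.2, rng=rng)
print(adj.sum() // 2, "edges in the final graph")
```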

8.
9.
This primer provides a self-contained exposition of the case where spatial birth-and-death processes are used for perfect simulation of locally stable point processes. In particular, a simple dominated coupling from the past (CFTP) algorithm and the CFTP algorithms introduced in [13], [14], and [5] are studied. Some empirical results for the algorithms are discussed.

10.
A discrete time Markov chain assumes that the population is homogeneous: each individual in the population evolves according to the same transition matrix. In contrast, a discrete mover-stayer (MS) model postulates a simple form of population heterogeneity; in each initial state, there is a proportion of individuals who never leave this state (stayers) and the complementary proportion of individuals who evolve according to a Markov chain (movers). The MS model has been extended by specifying the stayer probability to be a logistic function of an individual's covariates, while keeping the same transition matrix for all movers. We further extend the MS model by allowing each mover to have his or her own covariate-dependent transition matrix. The model for a mover's transition matrix is related to the existing mixture-of-Markov-chains model that mixes over the speed of movement of the chains. The proposed model is estimated using the expectation-maximization algorithm and illustrated with a large data set on car loans and a simulation study.
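A minimal EM sketch for the basic mover-stayer model (homogeneous movers, no covariates), assuming complete paths observed at equally spaced times; the paper's extension additionally makes the stayer probability and each mover's transition matrix covariate-dependent.

```python
import numpy as np

def mover_stayer_em(paths, n_states, n_iter=100):
    """EM for the basic mover-stayer model from fully observed state paths."""
    paths = np.asarray(paths)                          # shape (n_individuals, T+1)
    pi = np.full(n_states, 0.3)                        # initial stayer proportions
    P = np.full((n_states, n_states), 1.0 / n_states)  # mover transition matrix
    never_moved = (paths == paths[:, [0]]).all(axis=1)
    first = paths[:, 0]
    T = paths.shape[1] - 1
    for _ in range(n_iter):
        # E-step: posterior probability that each individual is a stayer
        # (only individuals who never left their initial state can be stayers).
        stay_prob_mover = P[first, first] ** T
        w = np.where(never_moved,
                     pi[first] / (pi[first] + (1 - pi[first]) * stay_prob_mover),
                     0.0)
        # M-step: stayer proportion per initial state.
        for s in range(n_states):
            mask = first == s
            if mask.any():
                pi[s] = w[mask].mean()
        # M-step: mover transition matrix from transitions weighted by 1 - w.
        counts = np.zeros((n_states, n_states))
        for i in range(paths.shape[0]):
            for t in range(T):
                counts[paths[i, t], paths[i, t + 1]] += 1 - w[i]
        row_sums = counts.sum(axis=1, keepdims=True)
        P = counts / np.where(row_sums > 0, row_sums, 1.0)
    return pi, P

# Tiny toy usage with hand-made paths over states {0, 1}:
paths = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 1, 0], [1, 1, 1, 1], [1, 0, 1, 1]]
pi, P = mover_stayer_em(paths, n_states=2)
print(pi)
print(P)
```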

11.
In this paper we generalize the technique presented by Häggström and Steif (Comb. Probab. Comput. 9:425–439, 2000) for the exact simulation of finite sections of infinite-volume Gibbs random fields to a more general class of discrete time nearest neighbour spin systems. The main role is played by an auxiliary binary field, which indicates the sampling region. Percolation bounds can be used to prove that the algorithm terminates a.s. In the simplest case this field is Bernoulli; however, blocking techniques can be used that destroy the independence property but extend the validity of the algorithm. Finally, the connection with stationary unilateral fields in the plane considered by Pickard (Adv. Appl. Probab. 12:655–671, 1980) and Galbraith and Walley (J. Appl. Probab. 19:332–343, 1982) is discussed.

12.
In this paper, we propose a fully polynomial-time randomized approximation scheme (FPRAS) for a closed Jackson network. Our algorithm is based on the Markov chain Monte Carlo (MCMC) method. Thus our scheme returns an approximate solution, for which the size of the error satisfies a given bound. To our knowledge, this algorithm is the first polynomial time MCMC algorithm for closed Jackson networks with multiple servers. We propose two new ergodic Markov chains, both of which have a unique stationary distribution that is the product form solution for closed Jackson networks. One of them is for an approximate sampler, and we show that it mixes rapidly. The other is for a perfect sampler based on the monotone coupling from the past (CFTP) algorithm proposed by Propp and Wilson, and we show that it has a monotone update function.
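The hard part of the product-form distribution is its normalizing constant. As a baseline for the single-server, load-independent case (not the multiple-server setting targeted by the paper's MCMC schemes), the constant can be computed exactly by Buzen's convolution algorithm; a short sketch under those assumptions:

```python
import numpy as np

def buzen_normalizing_constant(rho, N):
    """G(N) for a closed network of single-server stations and N customers.

    rho[i] = e_i / mu_i: visit ratio over service rate of station i.
    """
    g = np.zeros(N + 1)
    g[0] = 1.0
    for r in rho:                      # convolve the stations in one at a time
        for n in range(1, N + 1):
            g[n] += r * g[n - 1]       # G_m(n) = G_{m-1}(n) + rho_m * G_m(n-1)
    return g[N]

# Example: 3 stations, 5 customers; pi(n) = rho_1^{n_1} rho_2^{n_2} rho_3^{n_3} / G(5).
print(buzen_normalizing_constant(rho=[1.0, 0.5, 0.25], N=5))
```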

13.
We address the following question: is the causal coupling method as strong as the conductance method in showing rapid mixing of Markov chains? A causal coupling is a coupling which uses only past and present information, but not information about the future. We answer the above question in the negative by showing that there exists a bipartite graph G such that any causal coupling argument on the Jerrum–Sinclair Markov chain for sampling almost uniformly from the set of perfect and near perfect matchings of G must necessarily take time exponential in the number of vertices in G. In contrast, the above Markov chain on G has been shown to mix in polynomial time using conductance arguments. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 18: 1–17, 2001

14.
Statistical estimates of the solutions of boundary value problems for parabolic equations with constant coefficients are constructed on paths of random walks. The phase space of these walks is a region in which the problem is solved or the boundary of the region. The simulation of the walks employs the explicit form of the fundamental solution; therefore, these algorithms cannot be directly applied to equations with variable coefficients. In the present work, unbiased and low-bias estimates of the solution of the boundary value problem for the heat equation with a variable coefficient multiplying the unknown function are constructed on the paths of a Markov chain of random walks on balloids. To study the properties of the Markov chains and of the statistical estimates, the author extends the von Neumann–Ulam scheme, known in the theory of Monte Carlo methods, to equations with a substochastic kernel. The algorithm is based on a new integral representation of the solution to the boundary value problem.

15.
We present a new technique for constructing and analyzing couplings to bound the convergence rate of finite Markov chains. Our main theorem is a generalization of the path coupling theorem of Bubley and Dyer, allowing the defining partial couplings to have length determined by a random stopping time. Unlike the original path coupling theorem, our version can produce multistep (non-Markovian) couplings. Using our variable length path coupling theorem, we improve the upper bound on the mixing time of the Glauber dynamics for randomly sampling colorings. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2007
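The chain whose mixing time is improved here is the Glauber dynamics for proper q-colorings. A minimal sketch of one step of that dynamics on an arbitrary graph given as an adjacency list; the graph and q below are toy choices.

```python
import random

def glauber_step(coloring, adj, q, rng=random):
    """Recolor a uniformly random vertex with a uniform color not used by its neighbors."""
    v = rng.randrange(len(coloring))
    forbidden = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(q) if c not in forbidden]
    if allowed:                                   # always nonempty when q > max degree
        coloring[v] = rng.choice(allowed)
    return coloring

# Toy usage: a 5-cycle with q = 4 colors, started from a proper coloring.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = [0, 1, 0, 1, 2]
random.seed(0)
for _ in range(1000):
    glauber_step(coloring, adj, q=4)
print(coloring)
```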

16.
A methodology is proposed that is suitable for efficient simulation of continuous-time Markov chains that are nearly completely decomposable. For such Markov chains the effort to adequately explore the state space via Crude Monte Carlo (CMC) simulation can be extremely large. The purpose of this paper is to provide a fast alternative to the standard CMC algorithm, which we call Aggregate Monte Carlo (AMC). The idea of the AMC algorithm is to reduce the jumping back and forth of the Markov chain in small subregions of the state space. We accomplish this by aggregating such problem regions into single states. We discuss two methods to identify collections of states where the Markov chain may become ‘trapped’: the stochastic watershed segmentation from image analysis, and a graph-theoretic decomposition method. As a motivating application, we consider the problem of estimating the charge carrier mobility of disordered organic semiconductors, which contain low-energy regions in which the charge carrier can quickly become stuck. It is shown that the AMC estimator for the charge carrier mobility reduces computational costs by several orders of magnitude compared to the CMC estimator.

17.
Evaluating the performance of learning algorithms has been a main thread of theoretical research in machine learning. The performance of regularized regression algorithms based on independent and identically distributed (i.i.d.) samples has been studied in a large number of papers. In the present paper we provide convergence rates for regularized regression when the inputs form a p-order Markov chain.
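An illustrative toy setup (my own, not the paper's): Gaussian-kernel ridge regression, i.e. regularized least-squares regression, fitted on inputs generated by a Markov chain rather than drawn i.i.d.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs from an AR(1)-type Markov chain on [0, 1] (order p = 1 for simplicity).
n = 300
x = np.empty(n)
x[0] = 0.5
for t in range(1, n):
    x[t] = np.clip(0.8 * x[t - 1] + 0.2 * rng.uniform(), 0.0, 1.0)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)   # noisy regression target

def gram(a, b, bandwidth=0.1):
    """Gaussian kernel Gram matrix between point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))

# Regularized solution: alpha = (K + n*lam*I)^{-1} y, prediction f(t) = K(t, x) alpha.
lam = 1e-3
alpha = np.linalg.solve(gram(x, x) + n * lam * np.eye(n), y)
grid = np.linspace(0, 1, 5)
print(gram(grid, x) @ alpha)            # fitted values at a few test points
```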

18.
Markov chain Monte Carlo (MCMC) methods have been used in many fields (physics, chemistry, biology, and computer science) for simulation, inference, and optimization. In many applications, Markov chains are simulated for sampling from target probabilities π(X) defined on graphs G. The graph vertices represent elements of the system, the edges represent spatial relationships, while X is a vector of variables on the vertices which often take discrete values called labels or colors. Designing efficient Markov chains is a challenging task when the variables are strongly coupled. Because of this, methods such as the single-site Gibbs sampler often experience suboptimal performance. A celebrated algorithm, the Swendsen–Wang (SW) method, can address the coupling problem. It clusters the vertices as connected components after turning off some edges probabilistically, and changes the color of one cluster as a whole. It is known to mix rapidly under certain conditions. Unfortunately, the SW method has limited applicability and slows down in the presence of “external fields”, for example, likelihoods in Bayesian inference. In this article, we present a general cluster algorithm that extends the SW algorithm to general Bayesian inference on graphs. We focus on image analysis problems where the graph sizes are on the order of 10³–10⁶ vertices with small connectivity. The edge probabilities for clustering are computed using discriminative probabilities from data. We design versions of the algorithm to work on multigrid and multilevel graphs, and present applications to two typical problems in image analysis, namely image segmentation and motion analysis. In our experiments, the algorithm is at least two orders of magnitude faster (in CPU time) than the single-site Gibbs sampler.
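To make the "bond-then-flip-clusters" structure concrete, here is the classic Swendsen-Wang update for a zero-field Ising model on a periodic L x L grid (every cluster is flipped independently with probability 1/2, rather than one cluster at a time as described above). The generalization to external fields and likelihood terms is the paper's contribution and is not shown; beta and L are illustrative.

```python
import numpy as np

def sw_sweep(spins, beta, rng):
    """One Swendsen-Wang update: open bonds between equal neighbours, flip clusters."""
    L = spins.shape[0]
    parent = np.arange(L * L)                      # union-find over grid sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    p_open = 1.0 - np.exp(-2.0 * beta)             # bond probability for equal spins
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):        # right and down neighbours (periodic)
                nx, ny = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[nx, ny] and rng.uniform() < p_open:
                    union(x * L + y, nx * L + ny)
    # Flip every cluster independently with probability 1/2.
    flip = {}
    for x in range(L):
        for y in range(L):
            root = find(x * L + y)
            if root not in flip:
                flip[root] = rng.uniform() < 0.5
            if flip[root]:
                spins[x, y] *= -1
    return spins

rng = np.random.default_rng(0)
L, beta = 32, 0.3
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(100):
    sw_sweep(spins, beta, rng)
print(spins.mean())                                # crude magnetization estimate
```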

19.
In a previous paper by the second author, two Markov chain Monte Carlo perfect sampling algorithms—one called coupling from the past (CFTP) and the other (FMMR) based on rejection sampling—are compared using as a case study the move-to-front (MTF) self-organizing list chain. Here we revisit that case study and, in particular, exploit the dependence of FMMR on the user-chosen initial state. We give a stochastic monotonicity result for the running time of FMMR applied to MTF and thus identify the initial state that gives the stochastically smallest running time; by contrast, the initial state used in the previous study gives the stochastically largest running time. By changing from worst choice to best choice of initial state we achieve remarkable speedup of FMMR for MTF; for example, we reduce the running time (as measured in Markov chain steps) from exponential in the length n of the list nearly down to n when the items in the list are requested according to a geometric distribution. For this same example, the running time for CFTP grows exponentially in n. © 2003 Wiley Periodicals, Inc. Random Struct. Alg., 2003
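For reference, the move-to-front chain itself is simple to state. The sketch below just runs the forward chain with (truncated) geometric request probabilities; it is not the FMMR or CFTP perfect-sampling machinery compared in the paper.

```python
import numpy as np

def mtf_step(order, p, rng):
    """Request item i with probability p[i] and move it to the front of the list."""
    item = rng.choice(len(p), p=p)
    order.remove(item)
    order.insert(0, item)

n = 8
p = 0.5 ** np.arange(1, n + 1)           # geometric request probabilities, renormalized
p /= p.sum()
rng = np.random.default_rng(0)
order = list(range(n))                   # list state: item indices from front to back
for _ in range(10000):
    mtf_step(order, p, rng)
print(order)                             # frequently requested items drift to the front
```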

20.
We give a policy iteration algorithm to solve zero-sum stochastic games with finite state and action spaces and perfect information, when the value is defined in terms of the mean payoff per turn. This algorithm does not require any irreducibility assumption on the Markov chains determined by the strategies of the players. It is based on a discrete nonlinear analogue of the notion of reduction of a super-harmonic function. To cite this article: J. Cochet-Terrasson, S. Gaubert, C. R. Acad. Sci. Paris, Ser. I 343 (2006).
