Similar Literature
20 similar documents found (search time: 46 ms)
1.
The Monte Carlo within Metropolis (MCwM) algorithm, interpreted as a perturbed Metropolis–Hastings (MH) algorithm, provides an approach for approximate sampling when the target distribution is intractable. Assuming the unperturbed Markov chain is geometrically ergodic, we show explicit estimates of the difference between the nth step distributions of the perturbed MCwM and the unperturbed MH chains. These bounds are based on novel perturbation results for Markov chains which are of interest beyond the MCwM setting. To apply the bounds, we need to control the difference between the transition probabilities of the two chains and to verify stability of the perturbed chain.
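As a toy illustration of the kind of perturbation question treated here (a crude triangle-inequality bound, not the paper's geometric-ergodicity estimates; the chain and perturbation below are made up):

```python
import numpy as np

# Compare the n-step distributions of an exact chain P and a perturbed
# chain Pt in total variation, against the crude bound n * eps, where eps
# is the largest row-wise TV distance between P and Pt.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
Pt = P.copy()
Pt[0] = [0.48, 0.32, 0.20]            # perturb one row, keeping it stochastic

mu = np.array([1.0, 0.0, 0.0])        # start deterministically in state 0
eps = 0.5 * np.abs(P - Pt).sum(axis=1).max()

n = 25
d_exact, d_pert = mu.copy(), mu.copy()
for _ in range(n):
    d_exact = d_exact @ P
    d_pert = d_pert @ Pt
tv = 0.5 * np.abs(d_exact - d_pert).sum()
print(tv, n * eps)                    # actual gap vs. crude bound
```

The observed gap is typically far below the linear-in-n bound; sharper estimates of the kind derived in the paper exploit geometric ergodicity to make the bound uniform in n.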

2.
This paper develops exponential-type upper bounds for scaled occupation measures of singularly perturbed Markov chains in discrete time. By considering two time scales in the Markov chains, asymptotic analysis is carried out. The cases in which the fast-changing transition probability matrix is irreducible, and in which it is divisible into l ergodic classes, are examined first; upper bounds for a sequence of scaled occupation measures are derived. Then extensions to Markov chains involving transient states and/or nonhomogeneous transition probabilities are dealt with. The results further our understanding of the underlying Markov chains and related dynamic systems, which is essential for solving many control and optimization problems.

3.
This paper is devoted to perturbation analysis of denumerable Markov chains. Bounds are provided for the deviation between the stationary distribution of the perturbed and nominal chain, where the bounds are given by the weighted supremum norm. In addition, bounds for the perturbed stationary probabilities are established. Furthermore, bounds on the norm of the asymptotic decomposition of the perturbed stationary distribution are provided, where the bounds are expressed in terms of the norm of the ergodicity coefficient, or the norm of a special residual matrix. Refinements of our bounds for Doeblin Markov chains are considered as well. Our results are illustrated with a number of examples.
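A classical bound in this family uses Dobrushin's ergodicity coefficient; the sketch below (with a made-up 3-state example, and a Seneta-type inequality stated as an assumption rather than taken from this paper) checks such a bound numerically:

```python
import numpy as np

def stationary(M):
    # Stationary distribution via the left eigenvector for eigenvalue 1.
    w, v = np.linalg.eig(M.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return p / p.sum()

P  = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.6, 0.2],
               [0.1, 0.3, 0.6]])
Pt = np.array([[0.45, 0.35, 0.20],
               [0.20, 0.60, 0.20],
               [0.10, 0.25, 0.65]])

# Dobrushin ergodicity coefficient: largest TV distance between rows of P.
tau = 0.5 * max(np.abs(P[i] - P[k]).sum()
                for i in range(3) for k in range(3))
gap = np.abs(stationary(Pt) - stationary(P)).sum()      # ||pi~ - pi||_1
bound = np.abs(Pt - P).sum(axis=1).max() / (1.0 - tau)  # Seneta-type bound
print(gap, bound)
```

The bound is useful whenever tau < 1; the weighted-norm bounds of this paper extend the idea to denumerable state spaces, where tau alone may be uninformative.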

4.
5.
We consider an accessibility index for the states of a discrete-time, ergodic, homogeneous Markov chain on a finite state space; this index is naturally associated with the random walk centrality introduced by Noh and Rieger (2004) for a random walk on a connected graph. We observe that the vector of accessibility indices provides a partition of Kemeny’s constant for the Markov chain. We provide three characterizations of this accessibility index: one in terms of the first return time to the state in question, and two in terms of the transition matrix associated with the Markov chain. Several bounds are provided on the accessibility index in terms of the eigenvalues of the transition matrix and the stationary vector, and the bounds are shown to be tight. The behaviour of the accessibility index under perturbation of the transition matrix is investigated, and examples exhibiting some counter-intuitive behaviour are presented. Finally, we characterize the situation in which the accessibility indices for all states coincide.
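The accessibility index itself is not reproduced here, but Kemeny's constant K, which it partitions, can be computed by two standard routes that the following sketch cross-checks on a made-up 3-state chain:

```python
import numpy as np

# Two standard routes to Kemeny's constant K for an ergodic chain:
#   (1) K = trace(Z) - 1,  Z = (I - P + 1 pi^T)^{-1}  (fundamental matrix)
#   (2) K = sum over non-unit eigenvalues lam of P of 1 / (1 - lam)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
K_fund = float(np.trace(Z)) - 1.0

# Drop the eigenvalue closest to 1, sum 1/(1 - lam) over the rest.
rest = sorted(np.linalg.eigvals(P), key=lambda z: abs(z - 1.0))[1:]
K_eig = float(np.real(sum(1.0 / (1.0 - z) for z in rest)))
print(K_fund, K_eig)
```

Both routes give the same K, and K is independent of the starting state, which is what makes a state-by-state partition of it informative.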

6.
A maximum out forest of a digraph is a spanning subgraph that consists of disjoint diverging trees and has the maximum possible number of arcs. For an arbitrary weighted digraph, we consider a matrix of specific weights of maximum out forests and demonstrate how this matrix can be used to get a graph-theoretic interpretation for the limiting probabilities of Markov chains. For a special (nonclassical) correspondence between Markov chains and weighted digraphs, the matrix of Cesàro limiting transition probabilities of any finite homogeneous Markov chain coincides with the normalized matrix of maximum out forests of the corresponding digraph. This provides a finite (combinatorial) method to calculate the limiting probabilities of Markov chains and thus their stationary distributions. On the other hand, the Markov chain technique provides the proofs to some statements about digraphs.
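For an irreducible finite chain, this forest machinery reduces to the classical Markov chain tree theorem: the stationary probability of state i is proportional to the i-th principal minor of I − P (the total weight of spanning in-trees rooted at i). A minimal numerical check on a made-up chain:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]
L = np.eye(n) - P

# Principal minors of I - P: delete row i and column i, take determinant.
minors = np.array([
    np.linalg.det(np.delete(np.delete(L, i, axis=0), i, axis=1))
    for i in range(n)
])
pi_tree = minors / minors.sum()   # combinatorial stationary distribution

# Cross-check against the usual left-eigenvector computation.
w, v = np.linalg.eig(P.T)
pi_eig = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi_eig = pi_eig / pi_eig.sum()
print(pi_tree, pi_eig)
```

This is the "finite (combinatorial) method" in its simplest ergodic form; the paper's maximum-out-forest matrix extends it to chains with several ergodic classes.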

7.
We investigate perturbations of continuous-time Markov chains (CTMCs) on a countable state space. Explicit bounds are derived, in terms of a drift condition, on the perturbation of the intensity matrices and on the resulting deviation matrix. Moreover, we obtain perturbation bounds on the stationary distributions, extending the results of Liu (2012) for uniformly bounded CTMCs to general (possibly unbounded) CTMCs. Our arguments are mainly based on the technique of augmented truncations.

8.
We justify and discuss expressions for joint lower and upper expectations in imprecise probability trees, in terms of the sub- and supermartingales that can be associated with such trees. These imprecise probability trees can be seen as discrete-time stochastic processes with finite state sets and transition probabilities that are imprecise, in the sense that they are only known to belong to some convex closed set of probability measures. We derive various properties for their joint lower and upper expectations, and in particular a law of iterated expectations. We then focus on the special case of imprecise Markov chains, investigate their Markov and stationarity properties, and use these, by way of an example, to derive a system of non-linear equations for lower and upper expected transition and return times. Most importantly, we prove a game-theoretic version of the strong law of large numbers for submartingale differences in imprecise probability trees, and use this to derive point-wise ergodic theorems for imprecise Markov chains.
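A minimal sketch of the backward recursion for lower and upper n-step expectations, assuming the credal set is described by two candidate transition matrices with rows chosen independently (a toy model, not the paper's generality):

```python
import numpy as np

# Two candidate transition matrices; the imprecise chain's credal set is
# their row-wise convex hull (an assumption for this toy example).
P1 = np.array([[0.6, 0.4], [0.3, 0.7]])
P2 = np.array([[0.5, 0.5], [0.4, 0.6]])
g = np.array([0.0, 1.0])    # gamble: indicator of being in state 1

lo, hi = g.copy(), g.copy()
for _ in range(10):         # backward recursion over 10 time steps
    lo = np.minimum(P1 @ lo, P2 @ lo)   # lower transition operator
    hi = np.maximum(P1 @ hi, P2 @ hi)   # upper transition operator
print(lo, hi)               # lower/upper 10-step expectations per start state
```

The row-wise minimum and maximum implement the lower and upper transition operators; iterating them is exactly the law of iterated (lower/upper) expectations in this finite setting.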

9.
In this paper, potential theory is developed for finitely additive Markov chains. This theory is used to obtain various characterization theorems for discrete-time Markov chains with an arbitrary state space, finitely additive stationary transition probabilities, and a finitely additive initial distribution.

10.
Markov chains are often used as mathematical models of natural phenomena, with transition probabilities defined in terms of parameters that are of interest in the scientific question at hand. Sensitivity analysis is an important way to quantify the effects of changes in these parameters on the behavior of the chain. Many properties of Markov chains can be written as simple matrix expressions, and hence matrix calculus is a powerful approach to sensitivity analysis. Using matrix calculus, we derive the sensitivity and elasticity of a variety of properties of absorbing and ergodic finite-state chains. For absorbing chains, we present the sensitivities of the moments of the number of visits to each transient state, the moments of the time to absorption, the mean number of states visited before absorption, the quasistationary distribution, and the probabilities of absorption in each of several absorbing states. For ergodic chains, we present the sensitivity of the stationary distribution, the mean first passage time matrix, the fundamental matrix, and the Kemeny constant. We include two examples of application of the results to demographic and ecological problems.
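One of the quantities treated here, the sensitivity of the stationary distribution, admits a compact matrix-calculus formula: for a row-sum-zero perturbation dP of an ergodic chain, d(pi^T) = pi^T dP (I - P + 1 pi^T)^{-1}. A finite-difference check on a made-up chain (the formula is standard, not quoted from this paper):

```python
import numpy as np

def stationary(M):
    # Stationary distribution via the left eigenvector for eigenvalue 1.
    w, v = np.linalg.eig(M.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return p / p.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]
pi = stationary(P)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

dP = np.zeros((n, n))
dP[0, 0], dP[0, 1] = -1.0, 1.0   # shift mass within row 0; row sums stay 0

dpi_analytic = pi @ dP @ Z       # matrix-calculus derivative of pi

eps = 1e-6                       # finite-difference check
dpi_numeric = (stationary(P + eps * dP) - pi) / eps
print(dpi_analytic, dpi_numeric)
```

Requiring dP to have zero row sums keeps P + eps*dP stochastic, and it forces the components of d(pi) to sum to zero, as a derivative of a probability vector must.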

11.
Stochastic Analysis and Applications, 2013, 31(3): 559–565
For the GI^X/M/1 queue, it has recently been proved that there exist geometric distributions that are stochastic lower and upper bounds for the stationary distribution of the embedded Markov chain at arrival epochs. In this note we observe that this is also true for the GI^X/M^Y/1 queue. Moreover, we prove that the stationary distribution of its embedded Markov chain is asymptotically geometric. It is noteworthy that the asymptotic geometric parameter is the same as the geometric parameter of the upper bound. This fact justifies previous numerical findings about the quality of the bounds.
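For the plain GI/M/1 special case (without batches), the geometric parameter sigma of the embedded chain solves the classical fixed-point equation sigma = A*(mu (1 - sigma)), where A* is the Laplace-Stieltjes transform of the interarrival distribution. A small sketch, using Poisson arrivals so the fixed point is known in closed form (rho = lam / mu):

```python
# Fixed-point iteration for the geometric parameter sigma of GI/M/1:
#   sigma = A*(mu * (1 - sigma)),  A* = LST of the interarrival distribution.
lam, mu = 0.7, 1.0
A = lambda s: lam / (lam + s)    # Poisson arrivals: exponential interarrivals

sigma = 0.5
for _ in range(200):
    sigma = A(mu * (1.0 - sigma))
print(sigma)                     # for M/M/1 this converges to rho = lam / mu
```

The iteration converges because the map is a contraction near the stable root in the stable regime lam < mu; for other interarrival laws one simply swaps in the appropriate LST.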

12.
A Markov chain (with a discrete state space and a continuous parameter) is perturbed by forcing the chain to return to “permissible” states whenever it happens to enter “forbidden” states, with returns governed by a replacement distribution. The compensation method is employed to obtain the distribution for the modified chain, in terms of the original chain and the perturbation mechanism. Emphasis is placed on ergodic chains, and the interpretation of results in terms of the perturbation theory of semi-groups and ergodic potential theory (based on the fundamental matrix of a chain) is discussed.

13.
Decision-making under uncertainty and imprecision in real-world problems is a complex task. This paper introduces general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined; it is then shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrency does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be robust with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.

14.
By proving a local limit theorem for higher-order transitions, we determine the time required for necklace chains to be close to stationarity. Because necklace chains, built by arranging identical smaller Markov chains around a directed cycle, are not reversible, have little symmetry, do not have uniform stationary distributions, and can be nearly periodic, prior general bounds on rates of convergence of Markov chains either do not apply or give poor bounds. Necklace chains can serve as test cases for future techniques for bounding rates of convergence.

15.
In this paper we model the run-time behavior of genetic algorithms (GAs) using higher-cardinality representations as Markov chains, define the states of the Markov chain, and derive the transition probabilities of the corresponding transition matrix. We analyze the behavior of this chain and obtain bounds on its convergence rate and on the runtime complexity of the GA. We further investigate the effects of using binary versus higher-cardinality representations of a search space.

16.
This paper considers the augmented truncation approximation of the generator of an ergodic continuous-time Markov chain with a countably infinite state space. The main purpose of this paper is to present bounds for the absolute difference between the stationary distributions of the original generator and its augmented truncation. As examples, we apply the bounds to an M/M/s retrial queue and an upper Hessenberg Markov chain.
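A minimal sketch of augmented truncation, using an M/M/1 generator (a simpler example than the paper's M/M/s retrial queue) truncated at level N with the diagonal adjusted so that row sums stay zero, compared against the known geometric stationary law:

```python
import numpy as np

lam, mu, N = 0.6, 1.0, 60
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam        # births, cut off at the truncation level N
    if i > 0:
        Q[i, i - 1] = mu         # deaths
    Q[i, i] = -Q[i].sum()        # diagonal augmented so row sums are zero

# Solve pi Q = 0, pi 1 = 1: replace one balance equation by normalization.
A = np.vstack([Q.T[:-1], np.ones(N + 1)])
b = np.zeros(N + 1)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

rho = lam / mu                   # exact M/M/1 law: geometric(1 - rho)
exact = (1 - rho) * rho ** np.arange(N + 1)
err = np.abs(pi - exact).sum()
print(err)                       # truncation error, tiny for N = 60
```

For this birth-death example the truncation error decays like rho^(N+1); the paper's contribution is bounds of this kind for general generators, where no closed form is available.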

17.
Starting with finite Markov chains on partitions of a natural number n we construct, via a scaling limit transition as n → ∞, a family of infinite-dimensional diffusion processes. The limit processes are ergodic; their stationary distributions, the so-called z-measures, appeared earlier in the problem of harmonic analysis for the infinite symmetric group. The generators of the processes are explicitly described.

18.
The ergodic theory of Markov chains in random environments
Summary: A general formulation of the stochastic model for a Markov chain in a random environment is given, including an analysis of the dependence relations between the environmental process and the controlled Markov chain, in particular the problem of feedback. Assuming stationary environments, the ergodic theory of Markov processes is applied to give conditions for the existence of finite invariant measures (equilibrium distributions) and to obtain ergodic theorems, which provide results on the convergence of products of random stochastic matrices. Coupling theory is used to obtain results on the direct convergence of these products and on the structure of the tail σ-field. State properties, including classification and communication properties, are discussed.

19.
Zhao, Yiqiang Q.; Li, Wei; Braun, W. John. Queueing Systems, 1997, 27(1-2): 127–130
Heyman gives an interesting factorization of I − P, where P is the transition probability matrix for an ergodic Markov chain. We show that this factorization is valid if and only if the Markov chain is recurrent. Moreover, we provide a decomposition result which includes all ergodic, null recurrent, as well as transient Markov chains as special cases. Such a decomposition has been shown to be useful in the analysis of queues. This revised version was published online in June 2006 with corrections to the Cover Date.

20.
We consider stationary ℕ₀-valued Markov chains whose transition probabilities are associated with convolution structures of measures induced by linearization formulas of orthogonal polynomials. The best-known examples are random walks on polynomial hypergroups and generalized birth-and-death random walks. Using central limit theorems derived in a recent paper by the author, together with some martingale arguments, we prove a law of the iterated logarithm for a class of such Markov chains.

