Similar Literature
20 similar documents found.
1.
Zhao, Yiqiang Q., Li, Wei, Braun, W. John. Queueing Systems (1997) 27(1-2): 127-130
Heyman gives an interesting factorization of I-P, where P is the transition probability matrix for an ergodic Markov chain. We show that this factorization is valid if and only if the Markov chain is recurrent. Moreover, we provide a decomposition result which includes ergodic, null recurrent, and transient Markov chains as special cases. Such a decomposition has been shown to be useful in the analysis of queues. This revised version was published online in June 2006 with corrections to the Cover Date.
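The singularity of I-P that makes such factorizations delicate is easy to see numerically. A minimal sketch in NumPy, using an invented two-state chain (it illustrates the standard fundamental-matrix construction, not Heyman's specific factorization):

```python
import numpy as np

# Invented two-state ergodic chain (values not from the paper).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()                      # here pi = (0.8, 0.2)

# I - P is singular for any stochastic P (rows sum to 1) ...
rank = np.linalg.matrix_rank(np.eye(2) - P)   # 1, not full rank

# ... but adding the rank-one term 1*pi restores invertibility,
# giving the fundamental matrix that underlies much queueing analysis.
Z = np.linalg.inv(np.eye(2) - P + np.outer(np.ones(2), pi))
print(rank, Z @ np.ones(2))
```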

2.
This article is concerned with Markov chains on ℝ^m constructed by randomly choosing an affine map at each stage and then passing from the current point to its image under this map. The distribution of the random affine map may depend on the current point (i.e., the state of the chain). Sufficient conditions are given under which this chain is ergodic.

3.
Potential theory for ergodic Markov chains (with a discrete state space and a continuous parameter) is developed in terms of the fundamental matrix of a chain. A notion of an ergodic potential for a chain is introduced, and a form of the Riesz decomposition theorem for measures is proved. Ergodic potentials of charges (with total charge zero) are shown to play the role of Green potentials for transient chains.

4.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are “entangled” in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain. Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10.

5.
Let \((\xi _n)_{n=0}^\infty \) be a nonhomogeneous Markov chain taking values in a finite state space \(\mathbf {X}=\{1,2,\ldots ,b\}\). In this paper, we will study the generalized entropy ergodic theorem with almost-everywhere and \(\mathcal {L}_1\) convergence for nonhomogeneous Markov chains; this generalizes the corresponding classical results for Markov chains.

6.
This paper gives a systematic account of Markov processes in random environments, in four chapters. Chapter 1 covers Markov chains in time-dependent random environments (MCTRE), including the existence and equivalent descriptions of MCTRE; classification of states; ergodic theory and invariant measures; and central limit theorems and invariance principles for p-θ chains. Chapter 2 covers Markov processes in time-dependent random environments (MPTRE), including the basic concepts of MPTRE; existence and uniqueness of q-processes in random environments; time-homogeneous q-processes; and construction and equivalence theorems for MPTRE. Chapter 3 covers branching chains in time-dependent random environments (MBCRE), including the models and basic concepts of finite-dimensional and infinite-dimensional MBCRE; their extinction; polarization; and growth rates. Chapter 4 covers Markov chains in time- and space-dependent random environments (MCSTRE), including the basic concepts and construction of MCSTRE, and central limit theorems and invariance principles for random walks in time- and space-dependent random environments (RWSTRE).

7.
We consider ergodic backward stochastic differential equations in a discrete time setting, where noise is generated by a finite state Markov chain. We show existence and uniqueness of solutions, along with a comparison theorem. To obtain this result, we use a Nummelin splitting argument to obtain ergodicity estimates for a discrete time Markov chain which hold uniformly under suitable perturbations of its transition matrix. We conclude with an application of this theory to a treatment of an ergodic control problem.

8.
宋娟, 张铭. 《数学杂志》 2016, 36(5): 1097-1102
This paper studies the estimation of generalized Dobrushin coefficients for time-inhomogeneous Markov processes. Extending the classical Dobrushin ergodic coefficient to a weighted ergodic coefficient and using a matrix-splitting technique, we obtain estimates for this generalized ergodic coefficient, extending the known estimates for time-homogeneous Markov processes; these estimates in turn yield criteria for ergodicity.

9.
方舒. 《数学研究》 2010, 43(1): 55-66
We introduce the notions of strong ergodicity, absolute-mean strong ergodicity, and Cesàro-mean convergence for double nonhomogeneous Markov chains. Using the ergodicity of two-dimensional Markov chains and the Chapman-Kolmogorov equations, we establish the relationship between the ergodicity of two-dimensional Markov chains and that of double nonhomogeneous Markov chains, and we discuss the equivalence of absolute-mean strong ergodicity and strong ergodicity for homogeneous double Markov chains. Finally, applications of Cesàro-mean convergence to Markov decision processes and information theory are given.

10.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution. Such a class constitutes an irreducible finite ergodic Markov chain, which evolves independently once the chain has entered it.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then enables us to make the assumption on which the reduction rests: we reduce the ergodic class to its stronger part, which contains the most important events and also evolves more slowly. The reduced system retains the stochastic property, so it is again a Markov chain.
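The first step of the reduction, ordering states by steady-state mass, can be sketched as follows (the matrix is invented for illustration: two states carry almost all of the stationary probability, as in a two-time-scale chain):

```python
import numpy as np

# Invented chain with a two-time-scale flavour: states 0 and 1 carry
# almost all of the stationary mass, state 2 is rarely visited.
P = np.array([[0.89, 0.10, 0.01],
              [0.10, 0.89, 0.01],
              [0.45, 0.45, 0.10]])

# Steady-state distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Arrange states in decreasing order of steady-state probability;
# the reduction would then keep only the leading "stronger part".
order = np.argsort(pi)[::-1]
print(order, pi[order])   # state 2 comes last (mass about 0.011)
```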

11.
We prove that if a certain row of the transition probability matrix of a regular Markov chain is subtracted from the other rows of this matrix, and this row and the corresponding column are then deleted, the spectral radius of the resulting matrix is less than 1. We use this property of a regular Markov chain to construct an iterative process for solving the Howard system of equations, which arises in the study of controlled Markov chains with a single ergodic class and, possibly, transient states.
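The matrix operation described here is concrete enough to check numerically. A minimal sketch with an invented regular chain (not from the paper):

```python
import numpy as np

# Invented regular (ergodic, aperiodic) three-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

k = 0                        # index of the row to subtract and delete
Q = P - P[k]                 # subtract row k from every row
Q = np.delete(Q, k, axis=0)  # delete row k ...
Q = np.delete(Q, k, axis=1)  # ... and the corresponding column

# Spectral radius of the reduced matrix: strictly less than 1, so a
# fixed-point iteration built on Q (as in the Howard system) converges.
rho = max(abs(np.linalg.eigvals(Q)))
print(rho)   # 0.3 for this example
```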

12.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
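Reversibility here means detailed balance, π_i p_ij = π_j p_ji, which is straightforward to test. The sketch below uses invented matrices and only checks reversibility; it does not implement the paper's convex-minimization algorithm:

```python
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-10):
    """Detailed balance: pi_i * P_ij == pi_j * P_ji for all i, j."""
    pi = stationary(P)
    D = pi[:, None] * P          # D_ij = pi_i * P_ij
    return np.allclose(D, D.T, atol=tol)

# A birth-death chain is reversible ...
P_rev = np.array([[0.5, 0.5, 0.0],
                  [0.25, 0.5, 0.25],
                  [0.0, 0.5, 0.5]])
# ... a chain with a preferred cyclic direction is not.
P_cyc = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])

print(is_reversible(P_rev), is_reversible(P_cyc))
```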

13.
In this paper, higher-order circuit chains are defined as multiple Markov chains whose transition probabilities are expressed in terms of the weights of a finite class of circuits in a finite set, in connection with kinetic properties along the circuits. Conversely, it is proved that if any finite, doubly infinite, strictly stationary Markov chain of order r whose transitions hold cyclically is joined with a second chain having the same transitions in the reverse time-sense, then the pair may be represented as circuit chains of order r.

14.
朱志锋, 张绍义. 《数学学报》 2019, 62(2): 287-292
This paper studies exponential ergodicity for Markov chains on a general state space. For an exponentially ergodic Markov chain satisfying the additional condition π(f^p) < ∞ for some p > 1, a coupling method is used to show that there exists a full absorbing set on which the chain is f-exponentially ergodic.

15.
The ergodic theory of Markov chains in random environments
Summary. A general formulation of the stochastic model for a Markov chain in a random environment is given, including an analysis of the dependence relations between the environmental process and the controlled Markov chain, in particular the problem of feedback. Assuming stationary environments, the ergodic theory of Markov processes is applied to give conditions for the existence of finite invariant measures (equilibrium distributions) and to obtain ergodic theorems, which provide results on the convergence of products of random stochastic matrices. Coupling theory is used to obtain results on the direct convergence of these products and on the structure of the tail σ-field. State properties, including classification and communication properties, are discussed.

16.
We consider an accessibility index for the states of a discrete-time, ergodic, homogeneous Markov chain on a finite state space; this index is naturally associated with the random walk centrality introduced by Noh and Rieger (2004) for a random walk on a connected graph. We observe that the vector of accessibility indices provides a partition of Kemeny’s constant for the Markov chain. We provide three characterizations of this accessibility index: one in terms of the first return time to the state in question, and two in terms of the transition matrix associated with the Markov chain. Several bounds are provided on the accessibility index in terms of the eigenvalues of the transition matrix and the stationary vector, and the bounds are shown to be tight. The behaviour of the accessibility index under perturbation of the transition matrix is investigated, and examples exhibiting some counter-intuitive behaviour are presented. Finally, we characterize the situation in which the accessibility indices for all states coincide.
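The partition of Kemeny’s constant mentioned above can be illustrated numerically: with mean first passage times m_ij computed from the fundamental matrix, the sum Σ_j π_j m_ij is the same for every starting state i. A minimal sketch using an invented three-state chain and the standard Kemeny-Snell formulas (not the paper's accessibility index itself):

```python
import numpy as np

# Invented ergodic three-state chain.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
n = P.shape[0]

# Stationary distribution.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Fundamental matrix and mean first passage times (Kemeny-Snell):
# m_ij = (z_jj - z_ij) / pi_j, with the convention m_ii = 0.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

K = M @ pi   # K_i = sum_j pi_j * m_ij
print(K)     # all entries equal: Kemeny's constant
```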

17.
This paper develops exponential-type upper bounds for scaled occupation measures of singularly perturbed Markov chains in discrete time. By considering two time scales in the Markov chains, an asymptotic analysis is carried out. The cases in which the fast-changing transition probability matrix is irreducible, or is divisible into l ergodic classes, are examined first, and upper bounds for a sequence of scaled occupation measures are derived. Extensions to Markov chains involving transient states and/or nonhomogeneous transition probabilities are then dealt with. The results further our understanding of the underlying Markov chains and related dynamic systems, which is essential for solving many control and optimization problems.

18.
19.
Previously known works on the generalization performance of the least-squares regularized regression algorithm are usually based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper we go beyond this classical framework by studying the generalization performance of the least-squares regularized regression algorithm with Markov chain samples. We first establish a novel concentration inequality for uniformly ergodic Markov chains; we then establish bounds on the generalization performance of the least-squares regularized regression algorithm with uniformly ergodic Markov chain samples, and show that the algorithm is consistent in this setting.

20.