Similar articles
 20 similar articles found (search time: 15 ms)
1.
2.
A variety of continuous-parameter Markov chains arising in applied probability (e.g. epidemic and chemical reaction models) can be obtained as solutions of equations of the form
$$X_N(t) = x_0 + \sum_l \frac{l}{N}\, Y_l\Big(N \int_0^t f_l(X_N(s))\,ds\Big)$$
where $l \in \mathbb{Z}^d$, the $Y_l$ are independent Poisson processes, and $N$ is a parameter with a natural interpretation (e.g. total population size or volume of a reacting solution). The corresponding deterministic model satisfies
$$X(t) = x_0 + \int_0^t \sum_l l\, f_l(X(s))\,ds.$$
Under very general conditions $\lim_{N\to\infty} X_N(t) = X(t)$ a.s. The process $X_N(t)$ is compared to the diffusion processes given by
$$Z_N(t) = x_0 + \sum_l \frac{l}{N}\, B_l\Big(N \int_0^t f_l(Z_N(s))\,ds\Big)$$
and
$$V(t) = \sum_l l \int_0^t \sqrt{f_l(X(s))}\,dW_l(s) + \int_0^t \partial F(X(s)) \cdot V(s)\,ds,$$
where $F(x) = \sum_l l\, f_l(x)$. Under conditions satisfied by most of the applied probability models, it is shown that $X_N$, $Z_N$ and $V$ can be constructed on the same sample space in such a way that
$$X_N(t) = Z_N(t) + O\Big(\frac{\log N}{N}\Big)$$
and
$$\sqrt{N}\,\big(X_N(t) - X(t)\big) = V(t) + O\Big(\frac{\log N}{\sqrt{N}}\Big).$$
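As an illustrative sketch (not the paper's construction), the density-dependent chain $X_N$ and its deterministic limit $X$ can be simulated side by side with a standard Gillespie scheme; the jump rates $f_{+1}(x) = 2x(1-x)$ and $f_{-1}(x) = x$ below are made up for the example:

```python
import numpy as np

def gillespie_density_dependent(x0, N, T, rates, jumps, rng):
    """Simulate X_N(t) = x0 + sum_l (l/N) Y_l(N * int_0^t f_l(X_N(s)) ds)
    via the Gillespie algorithm (illustrative sketch only)."""
    x, t = x0, 0.0
    while t < T:
        a = np.array([N * f(x) for f in rates])  # propensities N*f_l(x)
        total = a.sum()
        if total <= 0:
            break
        t += rng.exponential(1.0 / total)        # time to next jump
        if t >= T:
            break
        l = jumps[rng.choice(len(jumps), p=a / total)]
        x += l / N                               # jump of size l/N
    return x

# Hypothetical rates: births f_{+1}(x) = 2x(1-x), deaths f_{-1}(x) = x
rates = [lambda x: 2.0 * max(x, 0.0) * max(1.0 - x, 0.0),
         lambda x: max(x, 0.0)]
jumps = [+1, -1]

def ode_limit(x0, T, dt=1e-3):
    """Euler scheme for the deterministic limit X'(t) = 2X(1-X) - X."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (2.0 * x * (1.0 - x) - x)
    return x

rng = np.random.default_rng(0)
N, T, x0 = 2000, 5.0, 0.2
xn = gillespie_density_dependent(x0, N, T, rates, jumps, rng)
x = ode_limit(x0, T)  # fixed point of x(1-2x) is 1/2
```

For $N = 2000$ the a.s. convergence in the abstract predicts $X_N(T)$ within roughly $1/\sqrt{N} \approx 0.02$ of $X(T)$.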

3.
Discrete Markov chains are applied widely for the analysis and design of high-speed ATM networks due to their essentially discrete nature. Unfortunately, their use is precluded for many important problems by the explosion of the state-space cardinality. In this paper we propose a new method for approximating a discrete Markov chain by a chain of considerably smaller dimension, based on the duality theory of optimization. A novel feature of our approach is that it provides guaranteed upper and lower bounds for the performance indices defined on the steady-state distribution of the original system. We apply our method to the problem of engineering multiplexers for ATM networks.
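The kind of performance index involved, a functional of the steady-state distribution, can be illustrated on a toy buffer chain; the paper's aggregation and duality bounds are not reproduced, and the parameters K, p, q below are hypothetical:

```python
import numpy as np

def stationary(P):
    """Steady-state distribution of a finite chain: solve pi P = pi, sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Discrete-time birth-death chain on {0,...,K} as a toy multiplexer buffer
K, p, q = 10, 0.3, 0.5   # arrival / service probabilities (made up)
P = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    up = p if i < K else 0.0
    down = q if i > 0 else 0.0
    P[i, min(i + 1, K)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down

pi = stationary(P)
overflow = pi[8:].sum()  # performance index: P(buffer occupancy >= 8)
```

For a birth-death chain the stationary distribution is geometric with ratio p/q, which gives a quick sanity check on the solver.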

4.
A generalization is given of a theorem of Csiszár and Körner concerning sequences of independent random variables. A corollary of that theorem concerns relative entropy and plays an important role in statistical hypothesis testing and coding theory. Using a strong law of large numbers for nonhomogeneous Markov chains, the theorem is extended to nonhomogeneous Markov chains.
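For concreteness, the relative entropy (Kullback-Leibler divergence) mentioned in the abstract can be computed for two finite distributions; the values of p and q are arbitrary:

```python
import math

def relative_entropy(p, q):
    """D(p || q) = sum_i p_i log(p_i / q_i), with 0 log 0 = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
d = relative_entropy(p, q)  # nonnegative; zero iff p == q
```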

5.
6.
A fluctuation theory for Markov chains on an ordered countable state space is developed, using ladder processes. These are shown to be Markov renewal processes. Results are given for the joint distribution of the extremum (maximum or minimum) and the first time the extremum is achieved. Also a new classification of the states of a Markov chain is suggested. Two examples are given.

7.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are "entangled" in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics as well, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain. Mathematics Subject Classification (2000) Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10
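The two ingredients named in the abstract, the entrywise (Schur) product and the square-root-amplitude lifting, can be sketched for a two-state classical chain; the transition matrix P is an arbitrary example:

```python
import numpy as np

# An arbitrary classical 2-state transition matrix (rows sum to 1)
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])

# "Square-root amplitude" lifting: amplitudes are (complex) square roots of
# the classical transition probabilities; here they are real and nonnegative.
A = np.sqrt(P.astype(complex))

def schur(X, Y):
    """Schur (entrywise/Hadamard) multiplication, as opposed to matrix product."""
    return X * Y

# Restricting to the computational basis recovers the classical probabilities:
recovered = schur(A, A.conj()).real   # |A_ij|^2 == P_ij
```

This only illustrates that the diagonal restriction of the lifted object reduces to the classical chain; the full quantum Markov chain construction in the paper is, of course, much richer.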

8.
Motivated by various applications, we describe the scaling limits of bivariate Markov chains (X, J) on Z+ × {1, …, κ} where X can be viewed as a position marginal and {1, …, κ} is a set of κ types. The chain starts from an initial value (n, i) ∈ Z+ × {1, …, κ}, with i fixed and n → ∞, and typically we will assume that the macroscopic jumps of the marginal X are rare, i.e. arrive with a probability proportional to a negative power of the current state. We also assume that X is non-increasing. We then observe different asymptotic regimes according to whether the rate of type change is proportional to, faster than, or slower than the macroscopic jump rate. In these different situations, we obtain in the scaling limit Lamperti transforms of Markov additive processes, which sometimes reduce to standard positive self-similar Markov processes. As first examples of applications, we study the number of collisions in coalescents in varying environment and the scaling limits of Markov random walks with a barrier. This completes previous results obtained by Haas and Miermont (2011) and Bertoin and Kortchemski (2016) in the monotype setting. In a companion paper, we will use these results as a building block to study the scaling limits of multi-type Markov branching trees, with applications to growing models of random trees and multi-type Galton–Watson trees.

9.
Necessary and sufficient conditions are given for a Markov chain to be R-recurrent and satisfy the Strong Ratio Limit Property, and for a Markov chain to be R-positive-recurrent.

10.
11.
Let r be an integer not less than 2. Suppose that we have a (not necessarily homogeneous) Markov chain with state space {0,1,…,r−1} given by the sequence of r×r transition matrices

12.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite-state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern may be expressed as a finite product of reflexive, transitive, and symmetric patterns. This result has implications for the embedding problem for doubly stochastic Markov chains. We also give an application of the obtained characterization to chain majorization.
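A minimal sketch of the objects involved: patterns as 0/1 matrices, the Boolean pattern product, and checks for the reflexive, symmetric, and transitive properties. The paper's full decision algorithm is not reproduced, and the patterns E1 and E2 are made-up examples:

```python
import numpy as np

def bool_prod(A, B):
    """Boolean product of patterns: entry (i,j) is positive iff some
    intermediate state k has A[i,k] > 0 and B[k,j] > 0."""
    return ((A.astype(int) @ B.astype(int)) > 0).astype(int)

def is_reflexive(A):
    return bool(np.all(np.diag(A) == 1))

def is_symmetric(A):
    return bool(np.array_equal(A, A.T))

def is_transitive(A):
    # A pattern is transitive iff its Boolean square is dominated by it
    return bool(np.all(bool_prod(A, A) <= A))

# Two equivalence-relation patterns (reflexive, symmetric, and transitive)
E1 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
E2 = np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1]])
product = bool_prod(E1, E2)   # a pattern expressible as such a finite product
```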

13.
14.
Let $(X_n)$ be a homogeneous Markov chain on an unbounded Borel subset of the real line, with a drift function which tends to a limit at infinity. Under a very simple hypothesis on the chain we prove that a suitably centered and normalized version of $(X_n)$ converges in distribution to a normal law, where the variance depends on the asymptotic behaviour of the drift. When the drift approaches its limit quickly enough, the random centering may be replaced by a deterministic one. These results are applied to the case of random walks on some hypergroups.


15.
Conditions, together with quantitative estimates, are obtained for Markov chains with a common set of states to be uniformly continuous in time. Within the framework of the method under consideration, it can be shown that such chains admit finite approximations. Translated from Problemy Ustoichivosti Stokhasticheskikh Modelei — Trudy Seminara, pp. 4–12, 1980.

16.
The following modification of a general state space discrete-time Markov chain is considered: certain transitions are supposed “forbidden” and the chain evolves until there is such a transition. At this instant the value of the chain is “replaced” according to a given rule, and, starting from the new value, the chain evolves normally until there is a forbidden transition again; the cycle is then repeated. The relationship of this modified process to the original one is studied in general terms, with particular emphasis being given to invariant measures. Examples are given which illustrate the results obtained.
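A toy finite-state simulation of the modified process described above; the transition matrix, the forbidden set, and the replacement rule (send the chain to state 1) are invented for illustration:

```python
import numpy as np

def modified_chain(P, forbidden, replace, x0, n_steps, rng):
    """Run the chain; whenever a forbidden transition (x, y) fires,
    replace the new value according to the given rule."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        y = rng.choice(len(P), p=P[x])   # attempted transition x -> y
        if (x, y) in forbidden:          # forbidden transition attempted
            y = replace(x, y)            # apply the replacement rule
        x = y
        path.append(x)
    return path

P = np.array([[0.1, 0.9, 0.0],
              [0.0, 0.2, 0.8],
              [0.5, 0.0, 0.5]])          # made-up transition matrix
rng = np.random.default_rng(1)
# Forbid the transition 2 -> 0; on attempting it, jump to state 1 instead
path = modified_chain(P, {(2, 0)}, lambda x, y: 1, 0, 50, rng)
```

The invariant measure of the modified process generally differs from that of the original chain; the paper studies exactly this relationship.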

17.
18.
The aim of this paper is to examine multiple Markov dependence in both the discrete- and the continuous-parameter case. In both cases the Markov property with arbitrary parameter values is investigated, and it is shown that it forces the multiple Markov dependence to degenerate to simple Markov dependence.

19.
Summary. A technique is presented which enables the state space of a Harris recurrent Markov chain to be split in a way that introduces an atom into the split state space. Hence the full force of renewal theory can be used in the analysis of Markov chains on a general state space. As a first illustration of the method we show how Derman's construction of the invariant measure works on a general state space. The splitting technique is also applied to the study of sums of transition probabilities.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号