Similar articles
20 similar articles found (search time: 93 ms)
1.
2.
3.
4.
For a two-state Markov chain, explicit results are derived for the distribution of the number of visits to state j during the time interval (1, n], given that the initial state (at time 0) was i. The proof is based on combinatorial results from partition theory.
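As a quick numerical companion to this result, the sketch below computes the same distribution by brute-force dynamic programming rather than by the paper's combinatorial formulas; the transition matrix P and the states i, j are illustrative assumptions.

```python
import numpy as np

def visit_distribution(P, i, j, n):
    """Return d with d[k] = Pr(#visits to j during steps 1..n equals k | X_0 = i)."""
    # f[s][k] = Pr(current state s, k visits to j so far)
    f = np.zeros((2, n + 1))
    f[i][0] = 1.0
    for _ in range(n):
        g = np.zeros_like(f)
        for s in range(2):
            for k in range(n + 1):
                if f[s][k] == 0.0:
                    continue
                for s2 in range(2):
                    k2 = k + (1 if s2 == j else 0)
                    g[s2][k2] += f[s][k] * P[s][s2]
        f = g
    return f.sum(axis=0)

P = np.array([[0.7, 0.3], [0.4, 0.6]])   # assumed example chain
d = visit_distribution(P, i=0, j=1, n=10)
print(d, d.sum())                         # the distribution sums to 1
```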

5.
The results of the author's preceding paper are carried over to a larger class of Markov chains. The Markov property of the dwelling time field is proved for chains with continuous time. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 158, pp. 39–44, 1987.

6.
A notion of ergodicity is defined by analogy to homogeneous chains, and a necessary and sufficient condition for it to hold for an inhomogeneous Markov chain is given in terms of matrix products. A comparison to the situation for homogeneous chains is made. A final section discusses the better-known notion of strong ergodicity in relation to the geometric convergence rate.
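The matrix-product condition can be watched numerically: weak ergodicity amounts to the rows of the forward products coalescing. A minimal sketch with an assumed time-varying two-state chain:

```python
import numpy as np

def row_discrepancy(M):
    """Max total-variation distance between any two rows of M."""
    return max(0.5 * np.abs(M[a] - M[b]).sum()
               for a in range(len(M)) for b in range(len(M)))

def P_t(t):
    """Time-varying 2x2 transition matrix (hypothetical example)."""
    a = 0.5 + 0.4 / (t + 1)          # mixing improves as t grows
    return np.array([[a, 1 - a], [1 - a, a]])

prod = np.eye(2)
for t in range(1, 50):
    prod = prod @ P_t(t)             # forward product P_1 P_2 ... P_t
    if t % 10 == 0:
        print(t, row_discrepancy(prod))   # shrinks toward 0 under ergodicity
```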

7.
We give simple proofs of large deviation theorems for the occupation measure of a Markov chain, using a regeneration argument to establish existence and convexity theory to identify the rate function.
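For intuition, the occupation measure L_n(x) = (1/n) #{t < n : X_t = x} of an ergodic chain concentrates on the stationary law, and the large deviation rate function governs how fast deviations decay. A simulation sketch with an assumed three-state chain:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

x, counts, n = 0, np.zeros(3), 20000
for _ in range(n):
    counts[x] += 1
    x = rng.choice(3, p=P[x])
print("empirical occupation:", counts / n)
print("stationary law      :", pi)
```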

8.
Upper and lower bounds for the expected first passage time to a distant point are obtained for ergodic countable Markov chains, in terms of a Lyapunov function and the stationary distribution.
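A minimal sanity check of a Foster-Lyapunov bound of this type, assuming V(x) = x and a one-step drift of at most -eps toward the target state, so that E_x[T_0] <= V(x)/eps. The downward-biased walk below is a toy example, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 0.3, 0.7            # up / down probabilities; drift eps = q - p = 0.4
x0, trials = 10, 5000

times = []
for _ in range(trials):
    x, t = x0, 0
    while x > 0:           # walk until first passage to 0
        x += 1 if rng.random() < p else -1
        t += 1
    times.append(t)
print("Monte Carlo E[T_0]:", np.mean(times))   # ~ x0/(q-p) = 25 here
print("Lyapunov bound    :", x0 / (q - p))
```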

9.
We study risk-sensitive control of continuous time Markov chains taking values in a discrete state space, for both finite and infinite horizon problems. In the finite horizon problem we characterize the value function via the Hamilton-Jacobi-Bellman equation and obtain an optimal Markov control. We do the same for the infinite horizon discounted cost case. In the infinite horizon average cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.
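As a hedged, discrete-time analogue of the finite horizon problem (the paper itself works in continuous time), the value of minimizing (1/theta) log E[exp(theta * total cost)] satisfies a multiplicative dynamic programming recursion; all model data below are invented for illustration:

```python
# W_T(x) = 1,
# W_t(x) = min_a exp(theta * c(x,a)) * sum_y p(y|x,a) W_{t+1}(y),
# value function = (1/theta) * log W_0(x).
import numpy as np

theta = 0.5
T, nX, nA = 20, 3, 2
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(nX), size=(nX, nA))   # P[x, a] = row p(.|x, a)
c = rng.uniform(0, 1, size=(nX, nA))            # running cost c(x, a)

W = np.ones(nX)
policy = np.zeros((T, nX), dtype=int)
for t in reversed(range(T)):
    Q = np.exp(theta * c) * np.einsum('xay,y->xa', P, W)
    policy[t] = Q.argmin(axis=1)
    W = Q.min(axis=1)
print("risk-sensitive values:", np.log(W) / theta)
print("first-stage actions  :", policy[0])
```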

10.
We study infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled continuous time Markov chains on a countable state space. For the discounted-cost game, we prove the existence of the value and a saddle-point equilibrium in the class of Markov strategies under nominal conditions. For the ergodic-cost game, we prove the existence of the value and a saddle-point equilibrium by studying the corresponding Hamilton-Jacobi-Isaacs equation under a certain Lyapunov condition.
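For intuition only, here is the classical risk-neutral, discrete-time counterpart of such games: Shapley's value iteration, where each state's one-shot matrix game is solved by linear programming. This ignores the risk-sensitive twist and uses assumed data throughout:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A for the row (maximizing) player."""
    m, n = A.shape
    # variables z = (x_1..x_m, v); maximize v  <=>  minimize -v
    c = np.r_[np.zeros(m), -1.0]
    A_ub = np.c_[-A.T, np.ones(n)]          # v <= (x^T A)_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

rng = np.random.default_rng(3)
nX, nA, nB, beta = 2, 2, 2, 0.9
r = rng.uniform(-1, 1, (nX, nA, nB))                # payoff r(x, a, b)
P = rng.dirichlet(np.ones(nX), size=(nX, nA, nB))   # p(.|x, a, b)

V = np.zeros(nX)
for _ in range(200):                                # Shapley iteration
    V = np.array([matrix_game_value(r[x] + beta * P[x] @ V) for x in range(nX)])
print("game values:", V)
```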

11.
In this paper the notion of variance bounding introduced by Roberts and Rosenthal (2008) is extended to continuous time Markov chains. Moreover, it is proven that, as in the discrete time case, variance bounding for reversible Markov chains is equivalent to the existence of a central limit theorem. A connection with the continuous time Peskun ordering, introduced by Leisen and Mira (2008), concludes the paper.
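The equivalence with a central limit theorem suggests a practical check: estimate the asymptotic variance of (1/n) sum f(X_t) by batch means for a reversible chain. The chain below (a Metropolis-type reflecting walk with uniform stationary law) is an assumed example:

```python
import numpy as np

rng = np.random.default_rng(4)
K, n, nb = 5, 200000, 200                  # states, steps, batches

def step(x):
    y = x + rng.choice([-1, 1])            # symmetric +/-1 proposal
    return y if 0 <= y < K else x          # reject outside {0,...,K-1}

xs = np.empty(n, dtype=int)
x = 0
for t in range(n):
    x = step(x)
    xs[t] = x

f = xs.astype(float)                       # test function f(x) = x
batches = f.reshape(nb, -1).mean(axis=1)
sigma2 = (n // nb) * batches.var(ddof=1)   # batch-means variance estimator
print("asymptotic variance estimate:", sigma2)
```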

12.
Summary. Given a Markov chain (X_n)_{n≥0}, random times τ are studied which are birth times or death times in the sense that the post-τ and pre-τ processes are independent given the present (X_{τ−1}, X_τ) at time τ, and the conditional post-τ process (birth times) or the conditional pre-τ process (death times) is again Markovian. The main result for birth times characterizes all time substitutions through homogeneous random sets with the property that all points in the set are birth times. The main result for death times is the dual of this and appears as the birth time theorem with the direction of time reversed. Part of this work was done while the author was visiting the Department of Mathematics, University of California at San Diego. The support of the Danish Natural Science Research Council is gratefully acknowledged.

13.
A class of models called interactive Markov chains is studied in both discrete and continuous time. These models were introduced by Conlisk and serve as a rich class for sociological modeling, because they allow for interactions among individuals. In discrete time, it is proved that the Markovian processes converge to a deterministic process almost surely as the population size becomes infinite. More importantly, the normalized process is shown to be asymptotically normal with specified mean vector and covariance matrix. In continuous time, the chain is shown to converge weakly to a diffusion process with specified drift and scale terms. The distributional results allow for the construction of a likelihood function from interactive Markov chain data, so they are important for questions of statistical inference. An example from manpower planning indicates the use of this theory in constructing and evaluating control policies for certain social systems.
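The discrete-time limit theorem can be illustrated directly: when each individual's transition matrix P(y) depends on the current proportion vector y, the empirical proportions of a large population track the deterministic recursion y <- y P(y). The dependence chosen below is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(5)

def P_of(y):
    """2x2 transition matrix depending on proportions y (hypothetical)."""
    a = 0.2 + 0.6 * y[1]                   # imitation effect of state 1
    return np.array([[1 - a, a], [0.1, 0.9]])

N, T = 10000, 30
counts = np.array([N, 0])                  # everyone starts in state 0
y_det = np.array([1.0, 0.0])
for t in range(T):
    P = P_of(counts / N)
    new = np.zeros(2, dtype=int)
    for s in range(2):                     # individuals move independently
        moves = rng.binomial(counts[s], P[s, 1])
        new[1] += moves
        new[0] += counts[s] - moves
    counts = new
    y_det = y_det @ P_of(y_det)            # deterministic limit recursion
print("stochastic proportions   :", counts / N)
print("deterministic proportions:", y_det)
```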

14.
If P is the transition matrix of a Markov chain and P̃ is derived by perturbing the elements of P, we find conditions under which P̃ is also positive recurrent when P is, and relate the invariant probability measures of the two. Similar results are found for recurrence of chains, and the methods then yield analogues for continuous time processes.
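A numerical illustration of the theme: perturb the entries of a transition matrix P (keeping rows stochastic) and compare the invariant distributions of P and the perturbed matrix. Both matrices below are assumed examples:

```python
import numpy as np

def stationary(P):
    """Stationary distribution as the left eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
E = np.array([[-0.02, 0.02, 0.0],
              [0.0,  -0.01, 0.01],
              [0.0,   0.01, -0.01]])      # rows sum to 0
P_tilde = P + E                           # still a stochastic matrix
print("pi for P      :", stationary(P))
print("pi for P_tilde:", stationary(P_tilde))
```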

15.
In this paper we introduce stopping times for quantum Markov states. We study the algebras and maps corresponding to stopping times, give a condition for the strong Markov property, and give a classification of projections with respect to accessibility. Our main result is a new recurrence criterion in terms of stopping times (Theorem 1 and Corollary 2). As an application of the criterion we show, in Section 6, that for the quantum Markov chain associated with the one-dimensional Heisenberg model, the (usually non-Markovian) process obtained by restriction to a diagonal subalgebra is such that all its states are recurrent. We were not able to obtain this result from the known recurrence criteria of classical probability. Supported by GNAFA-CNR, Bando n. 211.01.25.

16.
17.
Strong law of large numbers for countable nonhomogeneous Markov chains
The aim of this paper is to establish a strong law of large numbers for bivariate functions of countable nonhomogeneous Markov chains under a condition of uniform convergence in the Cesàro sense, which differs from our previous results. As corollaries, we generalize one of Liu and Liu's results for the univariate-function case and obtain another Shannon–McMillan–Breiman theorem for these Markov chains.
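In the homogeneous special case the strong law reads (1/n) sum_{k<n} f(X_k, X_{k+1}) -> sum_{i,j} pi_i p_{ij} f(i,j) almost surely, which is easy to check by simulation; the chain and the bivariate function f below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.6, 0.4], [0.3, 0.7]])
f = np.array([[0.0, 1.0], [2.0, 3.0]])    # f(i, j)

w, v = np.linalg.eig(P.T)                 # stationary distribution
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
limit = float((pi[:, None] * P * f).sum())

x, s, n = 0, 0.0, 100000
for _ in range(n):
    y = rng.choice(2, p=P[x])
    s += f[x, y]
    x = y
print("empirical average:", s / n, " limit:", limit)
```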

18.
Statistical Inference for Stochastic Processes - In this article, the maximum spacing (MSP) method is extended to continuous time Markov chains and semi-Markov processes and consistency of the MSP...
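A minimal illustration of the MSP idea on the simplest ingredient of a continuous time chain, an exponential holding time: the estimator maximizes the sum of log spacings of the fitted distribution function evaluated at the ordered sample. This one-parameter toy is not the paper's semi-Markov construction:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
sample = np.sort(rng.exponential(scale=1 / 2.5, size=500))   # true rate 2.5

def neg_msp(lam):
    """Negative MSP objective for the exponential rate lam."""
    F = 1.0 - np.exp(-lam * sample)
    spacings = np.diff(np.r_[0.0, F, 1.0])
    return -np.sum(np.log(np.maximum(spacings, 1e-300)))

res = minimize_scalar(neg_msp, bounds=(1e-3, 100.0), method='bounded')
print("MSP estimate of the rate:", res.x)   # should be near 2.5
```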

19.
We study the properties of finite ergodic Markov chains whose transition probability matrix P is singular. The results establish bounds on the convergence time of P^m to a matrix in which all rows equal the stationary distribution of P, and they suggest a simple rule for identifying the singular matrices that do not have a finite convergence time. We next study finite convergence to the stationary distribution independent of the initial distribution; the results establish the connection between the convergence time and the order of the minimal polynomial of the transition probability matrix. A queuing problem and a maintenance Markovian decision problem which possess the property of rapid convergence are presented.
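A small demonstration of finite convergence: for some singular transition matrices, P^m equals the limit matrix (all rows equal to the stationary distribution) exactly after finitely many steps, and the minimal polynomial degree can be found by checking linear dependence among powers of P. The matrix below is an assumed rank-one example, for which P^1 already equals the limit and the minimal polynomial is x^2 - x:

```python
import numpy as np

P = np.array([[0.2, 0.3, 0.5],
              [0.2, 0.3, 0.5],
              [0.2, 0.3, 0.5]])            # rank 1, hence singular; P^2 = P

def minimal_poly_degree(P, tol=1e-10):
    """Smallest d such that I, P, ..., P^d are linearly dependent."""
    mats, M = [np.eye(len(P)).ravel()], np.eye(len(P))
    for d in range(1, len(P) + 1):
        M = M @ P
        mats.append(M.ravel())
        if np.linalg.matrix_rank(np.array(mats), tol=tol) <= d:
            return d
    return len(P)

pi = P[0]                                  # stationary distribution here
limit = np.tile(pi, (3, 1))
print("P^1 == limit matrix:", np.allclose(P, limit))
print("minimal polynomial degree:", minimal_poly_degree(P))
```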

20.