Similar Articles (20 results found)
1.
We study the properties of finite ergodic Markov chains whose transition probability matrix P is singular. The results establish bounds on the convergence time of P^m to a matrix where all the rows are equal to the stationary distribution of P. The results suggest a simple rule for identifying the singular matrices which do not have a finite convergence time. We next study finite convergence to the stationary distribution independent of the initial distribution. The results establish the connection between the convergence time and the order of the minimal polynomial of the transition probability matrix. A queuing problem and a maintenance Markovian decision problem which possess the property of rapid convergence are presented.
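The example below is not from the paper; it is a minimal numpy sketch of the phenomenon described: a singular stochastic matrix P whose powers P^m reach the rank-one limit matrix (every row equal to the stationary distribution) exactly after finitely many steps.

```python
import numpy as np

# A singular transition matrix (its first column is zero, so det(P) = 0).
P = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.5, 0.5],
])

# Stationary distribution: the left eigenvector of P for eigenvalue 1, normalised.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()                      # here pi = (0, 0.5, 0.5)

limit = np.outer(np.ones(3), pi)        # matrix with every row equal to pi

# Find the smallest m with P^m equal to the limit matrix.
Pm = np.eye(3)
for m in range(1, 10):
    Pm = Pm @ P
    if np.allclose(Pm, limit, atol=1e-12):
        print(f"P^{m} already equals the limit matrix")
        break
```

For this P the powers stabilise at m = 2: every eigenvalue other than 1 is zero, so P minus the limit matrix is nilpotent and only finitely many powers differ from the limit, which is consistent with the connection to the minimal polynomial mentioned in the abstract.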

2.
3.
4.
Necessary and sufficient conditions are given for the convergence of the first moment of functionals of Markov chains.

5.
We obtain an estimate for the rate of convergence of normalized Poisson sums of random variables determined by the first-order autoregression procedure to a family of Wiener processes. Translated from Ukrains’kyi Matematychnyi Zhurnal, Vol. 58, No. 9, pp. 1155–1174, September 2006.
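The paper's quantitative estimate is not reproduced here. Purely for orientation, and with all parameter choices assumed rather than taken from the paper, the sketch below simulates normalised Poisson sums of a first-order autoregression and checks that, at time 1, they look approximately like W(1) ~ N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

a, sigma = 0.5, 1.0               # AR(1) coefficient and innovation std (assumed values)
n = 2_000                         # normalisation parameter
long_run_std = sigma / (1 - a)    # scale of the Brownian limit for AR(1) partial sums

def normalized_poisson_sum(t: float) -> float:
    """Sum the AR(1) sequence up to a Poisson(n*t) number of terms, scaled by sqrt(n)."""
    N = rng.poisson(n * t)
    x, total = 0.0, 0.0
    for _ in range(N):
        x = a * x + sigma * rng.standard_normal()
        total += x
    return total / (long_run_std * np.sqrt(n))

samples = np.array([normalized_poisson_sum(1.0) for _ in range(200)])
print(f"mean ≈ {samples.mean():.3f}, variance ≈ {samples.var():.3f}  (target: 0 and 1)")
```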

6.
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M (≥ 2) states.
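As a small sketch (not taken from the note itself), the familiar bound in this context, and possibly the one the note discusses, is that if state j ≠ i is accessible from state i at all in an M-state chain, it is accessible in at most M − 1 steps; this can be checked by taking powers of the support of the transition matrix. The chain below is an assumed example.

```python
import numpy as np

P = np.array([                       # a 4-state chain (assumed example values)
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.2, 0.8],
    [1.0, 0.0, 0.0, 0.0],
])
M = P.shape[0]

def min_steps(i: int, j: int) -> int | None:
    """Smallest k >= 1 with P^k(i, j) > 0, or None if j is never reached within M steps."""
    A = (P > 0).astype(int)          # support (adjacency) matrix of the transition graph
    reach = np.eye(M, dtype=int)     # 0-step reachability
    for k in range(1, M + 1):        # for j != i, accessibility implies a path of <= M - 1 steps
        reach = (reach @ A > 0).astype(int)
        if reach[i, j]:
            return k
    return None

print(min_steps(0, 3))               # state 3 is first reachable from state 0 in 3 steps
```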

7.
Consider a finite state irreducible Markov reward chain. It is shown that there exist simulation estimates and confidence intervals for the expected first passage times and rewards as well as the expected average reward, with 100% coverage probability. The length of the confidence intervals converges to zero with probability one as the sample size increases; it also satisfies a large deviations property.
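The 100%-coverage construction of the paper is not reproduced here. For orientation only, the sketch below estimates an expected first passage time by plain Monte Carlo with an ordinary normal-approximation confidence interval, and compares it with the exact value from a linear system; the chain and target state are assumed examples.

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([                       # an irreducible 3-state chain (assumed example)
    [0.2, 0.5, 0.3],
    [0.4, 0.1, 0.5],
    [0.3, 0.3, 0.4],
])

def first_passage_time(start: int, target: int) -> int:
    """Simulate one trajectory and return the number of steps until `target` is first hit."""
    state, steps = start, 0
    while True:
        state = rng.choice(3, p=P[state])
        steps += 1
        if state == target:
            return steps

samples = np.array([first_passage_time(0, 2) for _ in range(5000)])
mean = samples.mean()
half_width = 1.96 * samples.std(ddof=1) / np.sqrt(len(samples))
print(f"E[T_02] ≈ {mean:.3f} ± {half_width:.3f}  (95% normal-approx CI)")

# Exact value for comparison: solve (I - Q) h = 1 on the non-target states.
Q = np.delete(np.delete(P, 2, axis=0), 2, axis=1)
h = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print("exact E[T_02] =", h[0])
```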

8.
9.
In this paper circuit chains of superior order are defined as multiple Markov chains for which transition probabilities are expressed in terms of the weights of a finite class of circuits in a finite set, in connection with kinetic properties along the circuits. Conversely, it is proved that if we join any finite doubly infinite strictly stationary Markov chain of order r for which transitions hold cyclically with a second chain with the same transitions for the inverse time-sense, then they may be represented as circuit chains of order r.

10.
Discrete time Markov chains with interval probabilities
The parameters of Markov chain models are often not known precisely. Instead of ignoring this problem, a better way to cope with it is to incorporate the imprecision into the models. This has become possible with the development of models of imprecise probabilities, such as the interval probability model. In this paper we discuss some modelling approaches which range from simple probability intervals to the general interval probability models and further to the models allowing completely general convex sets of probabilities. The basic idea is that precisely known initial distributions and transition matrices are replaced by imprecise ones, which effectively means that sets of possible candidates are considered. Consequently, sets of possible results are obtained and represented using similar imprecise probability models. We first set up the model and then show how to perform calculations of the distributions corresponding to the consecutive steps of a Markov chain. We present several approaches to such calculations and compare them with respect to the accuracy of the results. Next we consider a generalisation of the concept of regularity and study the convergence of regular imprecise Markov chains. We also give some numerical examples to compare different approaches to calculations of the sets of probabilities.
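A minimal sketch, not the paper's general framework: with simple probability intervals L[i, j] ≤ P[i, j] ≤ U[i, j] on each row (rows chosen independently from their interval sets) and a precisely known initial distribution, tight one-step bounds on P(X₁ = j) have a closed form. All numbers below are assumed examples.

```python
import numpy as np

# Interval bounds on a 3-state transition matrix: L[i, j] <= P[i, j] <= U[i, j].
L = np.array([
    [0.1, 0.3, 0.2],
    [0.0, 0.4, 0.3],
    [0.2, 0.2, 0.2],
])
U = np.array([
    [0.4, 0.6, 0.5],
    [0.3, 0.7, 0.6],
    [0.5, 0.5, 0.6],
])
p0 = np.array([0.5, 0.3, 0.2])   # precisely known initial distribution

n = L.shape[0]
row_lo = np.empty((n, n))        # tightest feasible minimum of P[i, j] within row i
row_hi = np.empty((n, n))        # tightest feasible maximum of P[i, j] within row i
for i in range(n):
    for j in range(n):
        others = [k for k in range(n) if k != j]
        row_lo[i, j] = max(L[i, j], 1.0 - U[i, others].sum())
        row_hi[i, j] = min(U[i, j], 1.0 - L[i, others].sum())

# One-step bounds: each row is chosen adversarially within its interval set.
lower_p1 = p0 @ row_lo
upper_p1 = p0 @ row_hi
print("lower bounds on P(X_1 = j):", lower_p1)
print("upper bounds on P(X_1 = j):", upper_p1)
```

Iterating this over several steps is exactly where the approaches compared in the abstract can yield results of different accuracy, because the imprecision from consecutive steps has to be combined.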

11.
12.
This paper develops bounds on the rate of decay of powers of Markov kernels on finite state spaces. These are combined with eigenvalue estimates to give good bounds on the rate of convergence to stationarity for finite Markov chains whose underlying graph has moderate volume growth. Roughly, for such chains, order (diameter)^2 steps are necessary and suffice to reach stationarity. We consider local Poincaré inequalities and use them to prove Nash inequalities. These are bounds on l^2-norms in terms of Dirichlet forms and l^1-norms which yield decay rates for iterates of the kernel. This method is adapted from arguments developed by a number of authors in the context of partial differential equations and, later, in the study of random walks on infinite graphs. The main results do not require reversibility.
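As a rough numerical illustration only (the walk, the 1/4 threshold, and the sizes are assumptions, not the paper's examples): for the lazy simple random walk on an n-cycle, a graph of moderate growth, the number of steps needed to come within total-variation distance 1/4 of the uniform distribution grows on the order of (diameter)^2.

```python
import numpy as np

def mixing_time_on_cycle(n: int, eps: float = 0.25) -> int:
    """Steps until the lazy random walk on an n-cycle is within eps of uniform in TV distance."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5                       # lazy: stay put with probability 1/2
        P[i, (i - 1) % n] = 0.25
        P[i, (i + 1) % n] = 0.25
    dist = np.zeros(n)
    dist[0] = 1.0                           # start at vertex 0
    uniform = np.full(n, 1.0 / n)
    t = 0
    while 0.5 * np.abs(dist - uniform).sum() > eps:
        dist = dist @ P
        t += 1
    return t

for n in (10, 20, 40):
    diam = n // 2
    t_mix = mixing_time_on_cycle(n)
    print(f"n = {n:3d}  diameter^2 = {diam**2:5d}  t_mix ≈ {t_mix}  ratio ≈ {t_mix / diam**2:.2f}")
```

The printed ratio t_mix / diameter^2 stays roughly constant as n grows, which is the qualitative content of the abstract's claim.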

13.
We consider a class of Markov decision processes with finite state and action spaces which, essentially, is determined by the following condition: the state space is irreducible under the action of any stationary policy. However, except for this restriction, the transition law is completely unknown to the controller. In this context, we find a set of policies under which the frequency estimators of the transition law are strongly consistent and then, this result is applied to construct adaptive asymptotically discount-optimal policies. Dedicated to Professor Truman O. Lewis, on the occasion of his sixtieth birthday. This research was supported in part by the Third World Academy of Sciences (TWAS) under Grant TWAS RG MP 898-152, and in part by the Consejo Nacional de Ciencia y Tecnología (CONACYT) under Grant A128CCOEO550 (MT-2).
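A sketch of the frequency estimator the abstract refers to, simplified to a plain Markov chain rather than a controlled process, with an assumed transition law: count observed transitions along one trajectory and normalise each row; for an irreducible chain the estimate is strongly consistent.

```python
import numpy as np

rng = np.random.default_rng(2)

P_true = np.array([            # transition law unknown to the controller (assumed example)
    [0.1, 0.6, 0.3],
    [0.5, 0.2, 0.3],
    [0.3, 0.3, 0.4],
])

def frequency_estimate(n_steps: int) -> np.ndarray:
    """Empirical transition matrix from transition counts along one simulated trajectory."""
    counts = np.zeros((3, 3))
    state = 0
    for _ in range(n_steps):
        nxt = rng.choice(3, p=P_true[state])
        counts[state, nxt] += 1
        state = nxt
    visits = counts.sum(axis=1, keepdims=True)
    # Rows never visited default to the uniform row 1/3 (an arbitrary placeholder).
    return np.divide(counts, visits, out=np.full_like(counts, 1 / 3), where=visits > 0)

for n in (100, 10_000):
    err = np.abs(frequency_estimate(n) - P_true).max()
    print(f"n = {n:6d}  max |P_hat - P| = {err:.3f}")
```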

14.
Spectral analysis of finite Markov chains with spherical symmetries
We generalize the classical Fourier analysis of Gelfand pairs to the setting of groups acting non-transitively on a set X. We use this analysis to determine the spectrum of several random walks on graphs. Moreover, as a byproduct, we show that, for a new urn diffusion model, the cut-off phenomenon holds.
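The group-theoretic machinery is not reproduced here. Purely as a numerical counterpart, and using an assumed small urn model (the classical Ehrenfest chain, not necessarily the model of the paper), one can compute the spectrum of such a walk by brute force and see the highly structured eigenvalues that a Fourier-analytic approach explains in closed form.

```python
import numpy as np

d = 6                                   # number of balls (assumed small example)
P = np.zeros((d + 1, d + 1))
for k in range(d + 1):                  # state = number of balls in the first urn
    if k > 0:
        P[k, k - 1] = k / d             # a ball from the first urn is moved out
    if k < d:
        P[k, k + 1] = (d - k) / d       # a ball from the second urn is moved in

eigs = np.sort(np.linalg.eigvals(P).real)
print("numerical spectrum:    ", np.round(eigs, 6))
print("predicted (d - 2j) / d:", np.sort([(d - 2 * j) / d for j in range(d + 1)]))
```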

15.
Conditions for the finiteness of long run costs and rewards associated with infinite recurrent Markov chains that may be discrete or continuous in time are considered. Without resorting to results from the theory of Markov processes on general state spaces we provide instructive proofs in the course of which we derive auxiliary results that are of interest in themselves. Potential applications of the finiteness conditions are outlined in order to elucidate their high practical relevance.

16.
17.
18.
19.
Two kinds of eigentime identity for asymmetric finite Markov chains are proved, in both the ergodic case and the transient case.
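The asymmetric and transient versions are the paper's contribution. For orientation only, the sketch below numerically checks the classical ergodic (reversible) form of the eigentime identity, Σ_j π_j E_i[T_j] = Σ_{k≥2} 1/(1 - λ_k) with the left-hand side independent of the starting state i, on an assumed small birth-death chain.

```python
import numpy as np

# A small reversible (birth-death) chain; assumed example, not from the paper.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])
n = P.shape[0]

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Expected hitting times E_i[T_j]: for each target j solve (I - Q) h = 1 off j.
H = np.zeros((n, n))
for j in range(n):
    keep = [i for i in range(n) if i != j]
    Q = P[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    H[keep, j] = h

lhs = H @ pi                              # sum_j pi_j E_i[T_j], one value per start i
lam = np.linalg.eigvals(P)
rhs = np.sum(1.0 / (1.0 - lam[np.abs(lam - 1.0) > 1e-9])).real
print("sum_j pi_j E_i[T_j] per start i:", np.round(lhs, 6))
print("sum_{k>=2} 1/(1 - lambda_k):    ", round(rhs, 6))
```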

20.