Similar Literature
20 similar documents found.
1.
Let {Xn, n ≥ 0} and {Yn, n ≥ 0} be two stochastic processes such that Yn depends on Xn in a stationary manner, i.e. P(Yn ∈ A | Xn) does not depend on n. Sufficient conditions are derived for Yn to have a limiting distribution. If Xn is a Markov chain with stationary transition probabilities and Yn = f(Xn, …, Xn+k), then Yn depends on Xn in a stationary way. Two situations are considered: (i) {Xn, n ≥ 0} has a limiting distribution; (ii) {Xn, n ≥ 0} does not have a limiting distribution and exits every finite set with probability 1. Several examples are considered, including that of a non-homogeneous Poisson process with periodic rate function, for which we obtain the limiting distribution of the interevent times.
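A minimal simulation sketch of the periodic-rate example (not taken from the paper; the intensity function and all numerical values below are illustrative assumptions): simulate the non-homogeneous Poisson process by Lewis-Shedler thinning and inspect the empirical distribution of the interevent times once the process has forgotten its starting phase.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(t, a=2.0, b=1.0):
    # Hypothetical periodic intensity with period 1 (illustrative, not from the paper)
    return a + b * np.sin(2 * np.pi * t)

def simulate_nhpp(t_end, lam_max=3.0):
    """Event times of a non-homogeneous Poisson process on [0, t_end],
    generated by Lewis-Shedler thinning of a homogeneous process of rate lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate event of the dominating process
        if t > t_end:
            return np.array(events)
        if rng.random() < rate(t) / lam_max:     # accept with probability rate(t)/lam_max
            events.append(t)

events = simulate_nhpp(t_end=10_000.0)
gaps = np.diff(events)
# Drop an initial stretch and look at the empirical interevent-time distribution,
# which stabilises as the process forgets its starting phase.
print(np.quantile(gaps[len(gaps) // 10:], [0.25, 0.5, 0.75]))
```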

2.
A strongly ergodic non-homogeneous Markov chain is considered in this paper. As an analogue of the Poisson limit theorem for a homogeneous Markov chain returning to small cylindrical sets, a Poisson limit theorem is given for the non-homogeneous Markov chain. In addition, some results on approximate independence and on the probabilities of small cylindrical sets are given.

3.
This paper presents a unified approach to the study of the exact distribution (probability mass function, mean, generating functions) of three types of random variables: (a) variables related to success runs in a sequence of Bernoulli trials; (b) scan statistics, i.e. variables enumerating the moving windows in a linearly ordered sequence of binary outcomes (success or failure) which contain a prescribed number of successes; and (c) success run statistics related to several well-known urn models. Our approach is based on a Markov chain imbedding which permits the construction of probability vectors satisfying triangular recurrence relations. The results presented here cover not only the case of independent and identically distributed Bernoulli variables, but the non-identical case as well. An extension to models exhibiting Markov dependence among the successive trials is also briefly discussed.
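To illustrate the Markov chain imbedding idea in its simplest form (a sketch in the spirit of the approach, not code from the paper), the probability that n independent, possibly non-identical Bernoulli trials contain no success run of length k can be computed by propagating a probability vector over the states "current run length 0, …, k−1" plus one absorbing state:

```python
import numpy as np

def prob_no_run(p, k):
    """P(no success run of length >= k) for independent Bernoulli trials with
    success probabilities p[0], ..., p[n-1] (the non-identical case is allowed).
    States 0..k-1 track the current run length; state k is absorbing."""
    v = np.zeros(k + 1)
    v[0] = 1.0
    for pt in p:
        M = np.zeros((k + 1, k + 1))
        for j in range(k):
            M[j, j + 1] = pt        # a success extends the current run
            M[j, 0] = 1.0 - pt      # a failure resets the run length to zero
        M[k, k] = 1.0               # a run of length k has occurred: absorb
        v = v @ M
    return v[:k].sum()

# Example: 20 trials with slowly increasing success probability, runs of length 3
print(prob_no_run(np.linspace(0.3, 0.7, 20), k=3))
```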

4.
Starting from a real-valued Markov chain X0, X1, …, Xn with stationary transition probabilities, a random element {Y(t); t ∈ [0, 1]} of the function space D[0, 1] is constructed by letting Y(k/n) = Xk, k = 0, 1, …, n, and taking Y(t) constant in between. Sample tightness criteria for sequences {Yn(t); t ∈ [0, 1]}, n ≥ 1, of such random elements of D[0, 1] are then given in terms of the one-step transition probabilities of the underlying Markov chains. Applications are made to Galton-Watson branching processes.

5.
We propose a new method for the analysis of lot-per-lot inventory systems with backorders under rationing. We introduce an embedded Markov chain that approximates the state-transition probabilities. We provide a recursive procedure for generating these probabilities and obtain the steady-state distribution.
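The recursive procedure itself is specific to the inventory model, but once an (approximate) embedded transition matrix is available, its steady-state distribution can be obtained generically by solving πP = π with the probabilities summing to one. A small sketch (the matrix below is illustrative, not from the paper):

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with sum(pi) = 1 for an irreducible stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) pi = 0 plus the sum-to-one row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])                    # illustrative transition matrix
print(steady_state(P))
```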

6.
7.
We consider how to identify the transition rates of ion channels whose underlying gating scheme is kinetically modelled as a time-homogeneous Markov chain. Although deriving the lifetime distributions from the transition rates is straightforward, the inverse problem is difficult; a Markov chain inversion approach is developed to identify the transition rates from the parameters characterizing the lifetime distributions at a small number of states. General explicit equations relating the parameters of the lifetime distributions to the transition rates are derived, and the transition rates are then obtained as roots of this system of equations. Concrete solutions are given for the basic regular schemes, such as linear, star-graph branch and loop schemes. Conclusions and solutions for realistic schemes are also included to demonstrate the efficiency of the approach.
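The "straightforward" forward direction that the paper inverts can be sketched directly: for a given generator, the lifetime (dwell-time) distribution of a set of states is a mixture of exponentials whose parameters come from the eigendecomposition of the corresponding sub-generator. The generator below is illustrative only, not an actual ion-channel scheme from the paper.

```python
import numpy as np

def lifetime_mixture(Q, A, phi):
    """Mixed-exponential parameters of the dwell time in the state set A of a CTMC
    with generator Q, given an entry distribution phi over A.
    Returns (rates, weights) with f(t) = sum_i weights[i] * rates[i] * exp(-rates[i] * t)."""
    QAA = Q[np.ix_(A, A)]
    eigvals, V = np.linalg.eig(QAA)
    Vinv = np.linalg.inv(V)
    rates = -eigvals                               # eigenvalues of the sub-generator are negative
    exit_rates = -QAA @ np.ones(len(A))            # rates of leaving the set A from each state
    # decompose f(t) = phi expm(QAA t) (-QAA) 1 along the eigenbasis of QAA
    weights = np.array([(phi @ V[:, i]) * (Vinv[i, :] @ exit_rates) / rates[i]
                        for i in range(len(A))])
    return rates.real, weights.real

# Illustrative 3-state generator (states 0, 1 "open", state 2 "closed")
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
rates, weights = lifetime_mixture(Q, A=[0, 1], phi=np.array([1.0, 0.0]))
print(rates, weights)   # the weights sum to 1: a two-component exponential mixture
```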

8.
9.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore of considerable interest.

The long-run behaviour of a homogeneous finite Markov chain is determined by its persistent states, obtained after decomposition into classes of communicating states. In this paper we present a new reduction method for the ergodic classes formed by such persistent states. An ergodic class has a steady state that is independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the chain has been captured by the class.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then justifies the assumption on which the reduction rests: the ergodic class is reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system retains the stochastic property, so it remains a Markov chain.
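A related standard construction (not the paper's algorithm, which exploits the two-time-scale structure) is the censored chain, or stochastic complement, which restricts an ergodic chain to a chosen set of the most probable states while remaining a genuine Markov chain. A sketch with an illustrative matrix:

```python
import numpy as np

def censored_chain(P, keep):
    """Stochastic complement of P on the index set `keep`:
    P_AA + P_AB (I - P_BB)^{-1} P_BA, i.e. the chain watched only on `keep`."""
    n = P.shape[0]
    drop = [i for i in range(n) if i not in keep]
    PAA = P[np.ix_(keep, keep)]
    PAB = P[np.ix_(keep, drop)]
    PBA = P[np.ix_(drop, keep)]
    PBB = P[np.ix_(drop, drop)]
    return PAA + PAB @ np.linalg.solve(np.eye(len(drop)) - PBB, PBA)

# Keep the two most probable states of an illustrative 4-state ergodic chain
P = np.array([[0.90, 0.05, 0.03, 0.02],
              [0.10, 0.80, 0.05, 0.05],
              [0.30, 0.30, 0.20, 0.20],
              [0.25, 0.25, 0.25, 0.25]])
print(censored_chain(P, keep=[0, 1]))   # rows sum to 1: a reduced Markov chain
```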

10.
We consider a discrete-time Markov chain on the non-negative integers with drift to infinity and study the limiting behavior of the state probabilities conditioned on not having left state 0 for the last time. Using a transformation, we obtain a dual Markov chain with an absorbing state such that absorption occurs with probability 1. We prove that the state probabilities of the original chain conditioned on not having left state 0 for the last time are equal to the state probabilities of its dual conditioned on non-absorption. This allows us to establish the simultaneous existence, and then equivalence, of their limiting conditional distributions. Although a limiting conditional distribution for the dual chain is always a quasi-stationary distribution in the usual sense, a similar statement is not possible for the original chain.

11.
We focus on continuous-time Markov chains as a model to describe the evolution of credit ratings. In this work it is checked whether a simple, tridiagonal type of generator provides a good approximation to a general one. Three different tridiagonal approximations are proposed and their performance is checked against two generators, corresponding to a volatile and a stable period, respectively.
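As a toy illustration only (the paper proposes three specific approximations, which are not reproduced here), the crudest conceivable tridiagonal approximation simply discards all transitions between non-adjacent ratings and repairs the diagonal so that each row of the generator again sums to zero:

```python
import numpy as np

def tridiagonalize_generator(Q):
    """Naive tridiagonal approximation of a CTMC generator Q: keep only transitions
    to adjacent ratings, then reset the diagonal so every row sums to zero again."""
    n = Q.shape[0]
    T = np.zeros_like(Q)
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                T[i, j] = max(Q[i, j], 0.0)
        T[i, i] = -T[i].sum()
    return T

# Illustrative 4-rating generator (per-year transition rates, not real data)
Q = np.array([[-0.10, 0.08, 0.015, 0.005],
              [ 0.05, -0.15, 0.090, 0.010],
              [ 0.01,  0.10, -0.200, 0.090],
              [ 0.00,  0.02,  0.080, -0.100]])
print(tridiagonalize_generator(Q))
```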

12.
The evolution of a closed discrete-time homogeneous Markov system (HMS) is determined by the evolution of its state sizes in time. In order to examine the variability of the state sizes, their moments are evaluated for any time point, and recursive formulae for their computation are derived. As a consequence, the asymptotic values of the moments for a convergent HMS can be evaluated. The corresponding recursive formula for an HMS with a periodic transition matrix is also given. The p.d.f.'s of the state sizes then follow directly from the moments. The theoretical results are illustrated by a numerical example. This research was partially supported by the State Scholarships Foundation of Greece.
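The first-moment part of such a recursion is elementary and can serve as a sketch: for a closed homogeneous Markov system the vector of expected state sizes is propagated by the transition matrix, E[N(t+1)] = E[N(t)]P. This is our illustration only; the paper's recursive formulae for higher moments are not reproduced, and the matrix below is illustrative.

```python
import numpy as np

def expected_state_sizes(N0, P, t):
    """Expected state sizes of a closed homogeneous Markov system after t steps:
    E[N(t)] = N0 P^t (first moments only)."""
    N = np.asarray(N0, dtype=float)
    for _ in range(t):
        N = N @ P
    return N

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])      # illustrative transition matrix
print(expected_state_sizes([100, 50, 0], P, t=20))   # converges for a convergent HMS
```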

13.
We give the distributions of the survival time and the death time of a set of states in a Markov chain (both are mixed exponential distributions), together with the constraint relations between the derivatives of all orders of these distributions and the transition rates. Using these results we prove that, for an ion channel modelled as a star-branch Markov chain, all of the transition rates can be uniquely determined from the distributions of the survival and death times of the central state and its neighbouring states. A corresponding algorithm is given, and its correctness and effectiveness are illustrated by examples.

14.
An equivalent definition of tree-indexed Markov chains
Research on tree-indexed stochastic processes has already produced a number of results both in China and abroad. Benjamini and Peres first gave the definition of tree-indexed Markov chains. Berger and Ye Zhongxing studied the existence of the entropy rate of stationary random fields on homogeneous trees. Yang Weiguo and Liu Wen studied the strong law of large numbers and the asymptotic equipartition property for Markov fields on trees, and Yang Weiguo further studied the strong law of large numbers for general tree-indexed Markov chains. In order to study a range of related problems on tree-indexed stochastic processes more effectively, this paper, building on an analysis of these earlier results, gives an equivalent definition of tree-indexed Markov chains and proves the equivalence by mathematical induction.

15.
Let X be a chain with discrete state space I, and V be the matrix of entries Vi,n, where Vi,n denotes the position of the process immediately after the nth visit to i. We prove that the law of X is a mixture of laws of Markov chains if and only if the distribution of V is invariant under finite permutations within rows (i.e., the Vi,n's are partially exchangeable in the sense of de Finetti). We also prove that an analogous statement holds true for mixtures of laws of Markov chains with a general state space and atomic kernels. Going back to the discrete case, we analyze the relationships between partial exchangeability of V and Markov exchangeability in the sense of Diaconis and Freedman. The main statement is that the former is stronger than the latter, but the two are equivalent under the assumption of recurrence. Combination of this equivalence with the aforesaid representation theorem gives the Diaconis and Freedman basic result for mixtures of Markov chains.

16.
In this paper we first introduce a new tensor product for a transition probability tensor originating from a higher-order Markov chain. Subsequently, some properties of the new tensor product are explained, and its relationship with the stationary probability vector is studied. The similarity between results obtained with this new product and the first-order case is also shown. Furthermore, we prove the convergence of a transition probability tensor to the stationary probability vector. Finally, we show how to obtain a stationary probability vector in some numerical examples and compare the proposed method with another existing method for obtaining stationary probability vectors.
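The paper's new tensor product is not reproduced here, but one common notion of a stationary probability vector for a third-order transition probability tensor P, with P[i, j, k] = P(X_t = i | X_{t-1} = j, X_{t-2} = k), is a fixed point of x_i = Σ_{j,k} P[i, j, k] x_j x_k. It can be approximated by the following iteration sketch (tensor entries are illustrative):

```python
import numpy as np

def stationary_vector(P, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration x <- P x x for a third-order transition probability tensor
    with P[i, j, k] = P(X_t = i | X_{t-1} = j, X_{t-2} = k) and sum_i P[i, j, k] = 1."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = np.einsum('ijk,j,k->i', P, x, x)
        x_new /= x_new.sum()                 # keep it a probability vector
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

# Illustrative 2-state second-order chain (entries sum to 1 over i for each (j, k))
P = np.array([[[0.6, 0.7], [0.3, 0.5]],
              [[0.4, 0.3], [0.7, 0.5]]])
print(stationary_vector(P))
```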

17.
The optimal-stopping problem in a partially observable Markov chain is considered and formulated as a Markov decision process. We treat a multiple stopping problem in this paper. Unlike the classical stopping problem, the current state of the chain is not known directly; information about the current state is always available from an information process. Several properties of the value and the optimal policy are given. For example, if we add another stop action to the k-stop problem, the increment of the value is decreasing in k. The author wishes to thank Professor M. Sakaguchi of Osaka University for his encouragement and guidance. He also thanks the referees for their careful readings and helpful comments.

18.
This paper deals with the asymptotic optimality of a stochastic dynamic system driven by a singularly perturbed Markov chain with finite state space. The states of the Markov chain belong to several groups such that transitions among the states within each group occur much more frequently than transitions among the states in different groups. Aggregating the states of the Markov chain leads to a limit control problem, which is obtained by replacing the states in each group by the corresponding average distribution. The limit control problem is simpler to solve than the original one. A nearly optimal solution for the original problem is constructed from the optimal solution to the limit problem. To demonstrate the suggested approach to asymptotically optimal control, it is applied to production-planning examples from manufacturing systems.
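A heuristic sketch of the averaging step described above, under the assumption that the diagonal blocks of the transition matrix capture the fast within-group dynamics (an illustration only, not the paper's construction): weight the group-to-group transition probabilities by the stationary distribution of each renormalised block.

```python
import numpy as np

def aggregate(P, groups):
    """Heuristic aggregation: approximate the within-group dynamics by the stationary
    distribution of each (row-renormalised) diagonal block, then average the
    group-to-group transition probabilities with those weights."""
    m = len(groups)
    Pbar = np.zeros((m, m))
    for k, G in enumerate(groups):
        B = P[np.ix_(G, G)]
        B = B / B.sum(axis=1, keepdims=True)          # renormalise the fast block
        w, V = np.linalg.eig(B.T)
        nu = np.real(V[:, np.argmax(np.real(w))])
        nu = nu / nu.sum()                            # within-group weights
        for l, H in enumerate(groups):
            Pbar[k, l] = nu @ P[np.ix_(G, H)].sum(axis=1)
    return Pbar

# Two groups of an illustrative 4-state chain with rare inter-group transitions
P = np.array([[0.69, 0.30, 0.005, 0.005],
              [0.40, 0.59, 0.005, 0.005],
              [0.01, 0.00, 0.500, 0.490],
              [0.00, 0.01, 0.490, 0.500]])
print(aggregate(P, groups=[[0, 1], [2, 3]]))          # rows of the 2x2 result sum to 1
```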

19.
We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions we give an estimate of the time of convergence to the equilibrium distribution, using the second largest eigenvalue of some matrices associated with the system.
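In the time-homogeneous special case the role of the second largest eigenvalue is the familiar one: the distance to equilibrium decays roughly like |λ2|^t, giving the crude estimate t ≈ log(1/ε)/log(1/|λ2|). A sketch of that special case (the paper treats the harder non-autonomous setting with products of different matrices; the matrix below is illustrative):

```python
import numpy as np

def convergence_time(P, eps=1e-3):
    """Crude convergence-time estimate for an ergodic homogeneous chain: the smallest t
    with |lambda_2|^t <= eps, where lambda_2 is the second largest eigenvalue in modulus."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    lam2 = eigvals[1]
    return int(np.ceil(np.log(1.0 / eps) / np.log(1.0 / lam2)))

P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])    # illustrative, slowly mixing chain
print(convergence_time(P))
```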

20.
In this paper a new estimator for the transition density π of a homogeneous Markov chain is considered. We introduce an original contrast derived from the regression framework and we use a model selection method to estimate π under mild conditions. The resulting estimator is adaptive, with an optimal rate of convergence over a large range of anisotropic Besov spaces. Some simulations are also presented.
