Similar documents
20 similar documents found
1.
Abstract

We introduce the concepts of lumpability and commutativity of a continuous-time, discrete-state-space Markov process, and provide a necessary and sufficient condition for a lumpable Markov process to be commutative. Under suitable conditions we recover some of the basic quantities of the original Markov process from the jump chain of the lumped Markov process.
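The strong-lumpability condition underlying constructions like this can be checked numerically: a chain is lumpable with respect to a partition exactly when, within each block, every state has the same total transition probability into each other block. A minimal discrete-time sketch in Python (the matrix, partition, and function name are illustrative, not from the paper, which works in continuous time):

```python
import numpy as np

def lump(P, partition, tol=1e-12):
    """Check strong lumpability of transition matrix P w.r.t. a partition
    (list of lists of state indices). Return the lumped transition matrix
    if lumpable, else None."""
    k = len(partition)
    Q = np.zeros((k, k))
    for a, block_a in enumerate(partition):
        for b, block_b in enumerate(partition):
            # total probability of jumping from each state of block_a into block_b
            sums = [P[i, block_b].sum() for i in block_a]
            if max(sums) - min(sums) > tol:
                return None  # probabilities differ within the block: not lumpable
            Q[a, b] = sums[0]
    return Q

# A 3-state chain that is lumpable over the partition {{0,1},{2}}:
P = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5],
              [0.3, 0.3, 0.4]])
Q = lump(P, [[0, 1], [2]])
```

For a continuous-time chain the same test can be applied to the generator matrix in place of P.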

2.
We study a PH/G/1 queue in which the arrival process and the service times depend on the state of an underlying Markov chain J(t) on a countable state space E. We derive the busy period process, waiting time and idle time of this queueing system. We also study the Markov-modulated EK/G/1 queueing system as a special case.

3.
We consider time-homogeneous Markov chains with state space E_k ≡ {0, 1, …, k} and initial distribution concentrated on the state 0. For pairs of such Markov chains, we study the stochastic tail order and the stochastic order in the usual sense between the respective first passage times to the state k. For this purpose, we develop a method based on a specific relation between two stochastic matrices on the state space E_k. Our method provides comparisons that are simpler and more refined than those obtained by the analysis based on spectral gaps. Copyright © 2016 John Wiley & Sons, Ltd.

4.
Let X be an ergodic Markov chain on a finite state space S0 and let s and t be finite sequences of elements from S0. We give an easily computable formula for the expected time of completing t, given that s was just observed. If A0 is a finite set of such sequences, we show how that formula may be used to compute the hitting distribution on A0.
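The flavor of such pattern-completion formulas can be reproduced by elementary means: build the prefix automaton of the target pattern and solve the hitting-time linear system E = (I - Q)^{-1} 1 over the transient states. The sketch below does this for i.i.d. coin flips rather than a general ergodic chain, and the function name is ours, not the paper's:

```python
import numpy as np

def expected_pattern_time(pattern, p=0.5):
    """Expected number of coin flips ('H'/'T', P(H) = p) until `pattern`
    first appears, via hitting-time equations on the prefix automaton."""
    n = len(pattern)

    def next_state(state, c):
        # longest suffix of (matched prefix + c) that is a prefix of pattern
        s = pattern[:state] + c
        while s and not pattern.startswith(s):
            s = s[1:]
        return len(s)

    # transient states 0..n-1 (length of prefix matched); state n absorbs
    Q = np.zeros((n, n))
    for i in range(n):
        for c, pr in (("H", p), ("T", 1 - p)):
            j = next_state(i, c)
            if j < n:
                Q[i, j] += pr
    # expected steps to absorption from each transient state
    E = np.linalg.solve(np.eye(n) - Q, np.ones(n))
    return E[0]
```

For a fair coin this recovers the classical values: "HT" takes 4 flips on average, while "HH" takes 6, since a failed "HH" attempt restarts from scratch.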

5.
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm. In this paper, an alternative to the DA algorithm is proposed. It is shown that the modified Markov chain is always more efficient than DA, in the sense that the asymptotic variance in the central limit theorem under the alternative chain is no larger than that under DA. The modification is based on Peskun's (Biometrika 60:607–612, 1973) result, which shows that the asymptotic variance of time-average estimators based on a finite state space reversible Markov chain does not increase if the Markov chain is altered by increasing all off-diagonal probabilities. In the special case when the state space or the augmentation space of the DA chain is finite, it is shown that Liu's (Biometrika 83:681–682, 1996) modified sampler can be used to improve upon the DA algorithm. Two illustrative examples, namely the beta-binomial distribution and a model for analyzing rank data, are used to show the gains in efficiency from the proposed algorithms.
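A concrete instance of the beta-binomial DA chain mentioned above: augmenting x with θ and alternating the draws θ | x ~ Beta(x+a, n−x+b) and x' | θ ~ Bin(n, θ) gives a chain on x whose kernel, after integrating θ out, is again beta-binomial, and whose stationary law is the BetaBin(n, a, b) target. The sketch below builds this plain DA kernel (not the paper's improved sampler) and its stationary distribution; parameter values are illustrative:

```python
import math

def betabin_pmf(k, n, a, b):
    """Beta-binomial pmf C(n,k) B(k+a, n-k+b) / B(a,b), via log-gamma."""
    return math.exp(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
        - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))
    )

def da_kernel(n, a, b):
    """DA chain on x in {0..n}: draw theta | x ~ Beta(x+a, n-x+b), then
    x' | theta ~ Bin(n, theta). Integrating theta out gives
    P(x, x') = BetaBin(x' | n, x+a, n-x+b)."""
    return [[betabin_pmf(x2, n, x + a, n - x + b) for x2 in range(n + 1)]
            for x in range(n + 1)]

n, a, b = 5, 2.0, 3.0
P = da_kernel(n, a, b)                                   # DA transition matrix
pi = [betabin_pmf(x, n, a, b) for x in range(n + 1)]     # target distribution
```

One can check numerically that each row of P sums to 1 and that pi is left-invariant for P, which is the stationarity property the DA construction guarantees.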

6.
Let C be a collection of particles, each of which is independently undergoing the same Markov chain, and let d be a metric on the state space. Then, using transition probabilities, for distinct p, q in C, any time t and real x, we can calculate F_pq^(t)(x) = Pr[d(p, q) < x at time t]. For each time t ≥ 0, the collection C is shown to be a probabilistic metric space under a suitable triangle function. In this paper we study the structure and limiting behavior of PM spaces so constructed. We show that whenever the transition probabilities have non-degenerate limits, the limit of the family of PM spaces exists and is a PM space under the same triangle function. For an irreducible, aperiodic, positive recurrent Markov chain, the limiting PM space is equilateral. For an irreducible, positive recurrent Markov chain with period p, the limiting PM space has at most [p/2] + 2 distinct distance distribution functions. Finally, we exhibit a class of Markov chains in which all of the states are transient, so that P_ij(t) → 0 for all states i, j, but for which the F_pq^(t) all have non-trivial limits, and hence a non-trivial limiting PM space does exist.

7.
We consider a sequence X_1, …, X_n of r.v.'s generated by a stationary Markov chain with state space A = {0, 1, …, r}, r ≥ 1. We study the overlapping appearances of runs of k_i consecutive i's, for all i = 1, …, r, in the sequence X_1, …, X_n. We prove that the number of overlapping appearances of the above multiple runs can be approximated by a compound Poisson r.v. whose compounding distribution is a mixture of geometric distributions. As an application of this result, we introduce a specific multiple-failure-mode reliability system with Markov-dependent components, and provide lower and upper bounds for the reliability of the system.
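The counting convention matters in results of this kind: appearances of a run of k consecutive i's are counted with overlap, i.e., one appearance per length-k window filled entirely with i. A small sketch of that convention (function name ours):

```python
def count_overlapping_runs(seq, symbol, k):
    """Number of (overlapping) length-k windows of seq consisting
    entirely of `symbol`."""
    return sum(1 for t in range(len(seq) - k + 1)
               if all(s == symbol for s in seq[t:t + k]))

# the sequence 1,1,1,0,1,1,1,1 contains five overlapping runs of two 1's
x = [1, 1, 1, 0, 1, 1, 1, 1]
```

Under this convention a single long block of i's contributes many appearances at once, which is exactly why a compound Poisson limit (clumps of geometric size) is the natural approximation rather than a plain Poisson one.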

8.
Summary  The basic problem considered in this paper is that of determining conditions for recurrence and transience of two-dimensional irreducible Markov chains whose state space is Z_+^2 = Z_+ × Z_+. Assuming bounded jumps and a homogeneity condition, Malyshev [7] obtained necessary and sufficient conditions for recurrence and transience of two-dimensional random walks on the positive quadrant. Unfortunately, his hypothesis that the jumps of the Markov chain be bounded rules out, for example, the Poisson arrival process. In this paper we generalise Malyshev's theorem by means of a method that makes novel use of the solution to Laplace's equation in the first quadrant satisfying an oblique derivative condition on the boundaries. This method, which allows one to replace the very restrictive boundedness condition by a moment condition and a lower boundedness condition, is of independent interest.

9.
A measure of the “mixing time” or “time to stationarity” in a finite irreducible discrete-time Markov chain is considered. The statistic η_i = Σ_j m_ij π_j, where {π_j} is the stationary distribution and m_ij is the mean first passage time from state i to state j of the Markov chain, is shown to be independent of the initial state i (so that η_i = η for all i), is minimal in the case of a periodic chain, yet can be arbitrarily large in a variety of situations. An application considering the effects that perturbations of the transition probabilities have on the stationary distributions of Markov chains leads to a new bound, involving η, for the 1-norm of the difference between the stationary probability vectors of the original and the perturbed chain. When η is large, the stationary distribution of the Markov chain is very sensitive to perturbations of the transition probabilities.
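The statistic η can be computed from the fundamental matrix Z = (I − P + 1π)^{−1}, since the mean first passage times satisfy m_ij = (z_jj − z_ij)/π_j (with m_ii = 0 under this formula), and its independence of the starting state can then be checked directly. A sketch, with an illustrative example matrix of our own:

```python
import numpy as np

def kemeny_eta(P):
    """For an irreducible chain with transition matrix P, compute
    eta_i = sum_j m_ij * pi_j (taking m_ii = 0) for every start state i,
    using the fundamental matrix Z = (I - P + 1 pi)^{-1}."""
    n = P.shape[0]
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    # mean first passage times: m_ij = (z_jj - z_ij) / pi_j
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]
    return M @ pi

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
eta = kemeny_eta(P)   # all entries equal: eta does not depend on the start state
```

The constancy of eta across starting states is the classical Kemeny-constant property that the abstract refers to.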

10.
In this paper, we study a variety of nonlinear time series models X_{n+1} = T_{Z_{n+1}}(X(n), …, X(n − Z_{n+1}), e_{n+1}(Z_{n+1})), in which {Z_n} is a Markov chain with finite state space and, for every state i of the Markov chain, {e_n(i)} is a sequence of independent and identically distributed random variables. The limit behavior of the sequence {X_n} defined by the above model is investigated, and some novel results on the underlying models are presented.

11.

We provide several necessary and sufficient conditions for a Markov chain on a general state space to be positive Harris recurrent. The conditions only concern asymptotic properties of the expected occupation measures.



12.
Zhu Zhifeng, Zhang Shaoyi. Acta Mathematica Sinica (数学学报), 2019, 62(2): 287–292
This paper studies the exponential ergodicity of Markov chains on a general state space. For an exponentially ergodic Markov chain satisfying the additional condition π(f^p) < ∞ for some p > 1, a coupling argument shows that there exists a full absorbing set on which the chain is f-exponentially ergodic.

13.
We first give an extension of a theorem of Volkonskii and Rozanov characterizing the strictly stationary random sequences satisfying ‘absolute regularity’. Then a strictly stationary sequence {X_k, k = …, −1, 0, 1, …} is constructed which is a 0–1 instantaneous function of an aperiodic Markov chain with countable irreducible state space, such that n^{−2} var(X_1 + ⋯ + X_n) approaches 0 arbitrarily slowly as n → ∞ and (X_1 + ⋯ + X_n) is partially attracted to every infinitely divisible law.

14.
It is often possible to speed up the mixing of a Markov chain \(\{ X_{t} \}_{t \in \mathbb {N}}\) on a state space \(\Omega \) by lifting, that is, running a more efficient Markov chain \(\{ \widehat{X}_{t} \}_{t \in \mathbb {N}}\) on a larger state space \(\hat{\Omega } \supset \Omega \) that projects to \(\{ X_{t} \}_{t \in \mathbb {N}}\) in a certain sense. Chen et al. (Proceedings of the 31st annual ACM symposium on theory of computing. ACM, 1999) prove that for Markov chains on finite state spaces, the mixing time of any lift of a Markov chain is at least the square root of the mixing time of the original chain, up to a factor that depends on the stationary measure of \(\{X_t\}_{t \in \mathbb {N}}\). Unfortunately, this extra factor makes the bound in Chen et al. (1999) very loose for Markov chains on large state spaces and useless for Markov chains on continuous state spaces. In this paper, we develop an extension of the evolving set method that allows us to refine this extra factor and find bounds for Markov chains on continuous state spaces that are analogous to the bounds in Chen et al. (1999). These bounds also allow us to improve on the bounds in Chen et al. (1999) for some chains on finite state spaces.
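The sense in which a lifted chain "projects" to the original can be made concrete: under the projection map, the stationary flow of the lifted chain between fibers must reproduce the stationary flow of the original chain. The sketch below checks this for the standard lifted walk on a cycle with persistent direction (in the style of Diaconis, Holmes and Neal); the construction is illustrative, not the paper's:

```python
import numpy as np

def lifted_cycle(n):
    """Lifting of a lazy walk on the n-cycle: lifted states are
    (position, direction); the walker keeps its direction with
    probability 1 - 1/n and reverses it with probability 1/n."""
    N = 2 * n
    Ph = np.zeros((N, N))                  # lifted kernel; (i, +) -> i, (i, -) -> i + n
    for i in range(n):
        Ph[i, (i + 1) % n] = 1 - 1 / n     # (i,+) -> (i+1,+)
        Ph[i, i + n] = 1 / n               # (i,+) -> (i,-)
        Ph[i + n, (i - 1) % n + n] = 1 - 1 / n   # (i,-) -> (i-1,-)
        Ph[i + n, i] = 1 / n               # (i,-) -> (i,+)
    # projected chain on the cycle itself
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i + 1) % n] = (1 - 1 / n) / 2
        P[i, (i - 1) % n] = (1 - 1 / n) / 2
        P[i, i] = 1 / n
    return Ph, P

n = 8
Ph, P = lifted_cycle(n)
pi_hat = np.full(2 * n, 1 / (2 * n))       # stationary law of the lifted chain
pi = np.full(n, 1 / n)                     # stationary law of the projected chain
f = np.arange(2 * n) % n                   # projection map (i, d) -> i
# flow condition: pi(x) P(x, y) = sum of pi_hat(a) Ph(a, b) over edges projecting to (x, y)
flow = np.zeros((n, n))
for a in range(2 * n):
    for b in range(2 * n):
        flow[f[a], f[b]] += pi_hat[a] * Ph[a, b]
```

The lifted walk circles the cycle almost deterministically, which is what buys the quadratic speed-up that the square-root lower bound of Chen et al. says is best possible.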

15.
Motivated by queueing systems playing a key role in the performance evaluation of telecommunication networks, we analyze in this paper the stationary behavior of a fluid queue whose instantaneous input rate is driven by a continuous-time Markov chain with finite or infinite state space. In the case of an infinite state space, and for particular classes of Markov chains with a countable state space, such as quasi-birth-and-death processes or Markov chains of the G/M/1 type, we develop an algorithm to compute the stationary probability distribution function of the buffer level in the fluid queue. This algorithm relies on simple recurrence relations satisfied by key characteristics of an auxiliary queueing system with normalized input rates.

16.
We argue that the spectral theory of non-reversible Markov chains may often be more effectively cast within the framework of the naturally associated weighted-L_∞ space ${L_\infty^V}$, instead of the usual Hilbert space L_2 = L_2(π), where π is the invariant measure of the chain. This observation is, in part, based on the following results. A discrete-time Markov chain with values in a general state space is geometrically ergodic if and only if its transition kernel admits a spectral gap in ${L_\infty^V}$. If the chain is reversible, the same equivalence holds with L_2 in place of ${L_\infty^V}$. In the absence of reversibility it fails: there are (necessarily non-reversible, geometrically ergodic) chains that admit a spectral gap in ${L_\infty^V}$ but not in L_2. Moreover, if a chain admits a spectral gap in L_2, then for any h ∈ L_2 there exists a Lyapunov function V_h ∈ L_1 such that V_h dominates h and the chain admits a spectral gap in ${L_\infty^{V_h}}$. The relationship between the size of the spectral gap in ${L_\infty^V}$ or L_2, and the rate at which the chain converges to equilibrium, is also briefly discussed.

17.
In this paper, we prove that the Foster–Lyapunov drift condition is necessary and sufficient for recurrence of a Markov chain on a general state space. This yields an affirmative answer to a question posed in the monograph of Meyn and Tweedie (Markov Chains and Stochastic Stability, Springer, Berlin, 1993, p. 175).
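Recall the shape of a Foster–Lyapunov drift condition: ΔV(x) = E[V(X_1) | X_0 = x] − V(x) ≤ −ε for all x outside a finite (or "small") set. A minimal sketch checking it for a reflecting random walk with downward bias (the chain, Lyapunov function, and names are ours, for illustration only):

```python
def drift(kernel, V, x):
    """One-step drift Delta V(x) = E[V(X1) | X0 = x] - V(x), where
    kernel(x) returns the list of (next_state, probability) pairs."""
    return sum(prob * V(y) for y, prob in kernel(x)) - V(x)

# reflecting walk on {0, 1, 2, ...}: up with prob. 0.3, down with prob. 0.7
def kernel(x):
    return [(x + 1, 0.3), (max(x - 1, 0), 0.7)]

V = lambda x: x   # identity Lyapunov function
```

Here the drift equals −0.4 at every x ≥ 1 and is positive only at 0, so the condition holds outside the finite set {0}, certifying recurrence in line with the theorem above.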

18.
We construct different classes of lumpings for a family of Markov chain products which reflect the structure of a given finite poset. We essentially use combinatorial methods. We prove that, for such a product, every lumping can be obtained from the action of a suitable subgroup of the generalized wreath product of symmetric groups, acting on the underlying poset block structure, if and only if the poset defining the Markov process is totally ordered, and one takes the uniform Markov operator in each factor state space. Finally we show that, when the state space is a homogeneous space associated with a Gelfand pair, the spectral analysis of the corresponding lumped Markov chain is completely determined by the decomposition of the group action into irreducible submodules.

19.
Summary  A continuous-parameter Markov process on a general state space has transition function P_t(x, E). The theory of regenerative phenomena is applied to the question: what functions of t can arise in this way? Particular attention is paid to processes of purely discontinuous type, to which known results for processes with a countable state space are extended.

20.
In this paper we consider the field of local times of a discrete-time Markov chain on a general state space, and obtain uniform (in time) upper bounds on the total variation distance between this field and that of a sequence of n i.i.d. random variables with law given by the invariant measure of the Markov chain. The proof uses a refinement of the soft local time method of Popov and Teixeira (2015).


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号