Similar Documents
A total of 20 similar documents were retrieved (search time: 15 ms)
1.
A reduced system is a smaller system derived in the process of analyzing a larger system. When solving for the steady-state probabilities of a Markov chain, the solution can generally be found by first solving a reduced system of equations obtained by appropriately partitioning the transition probability matrix. In this paper, we categorize reduced systems as standard and nonstandard and explore the existence of reduced systems and their properties relative to the original system. We also discuss first passage probabilities and means for the standard reduced system relative to the original system. These properties are illustrated while determining the steady-state probabilities and first passage time characteristics of a queueing system.
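One common way to form such a reduced system (not necessarily the paper's exact construction) is the stochastic complement: partition the transition matrix into blocks and fold one block back into the other. A minimal numpy sketch, with a hypothetical 4-state chain whose entries are invented for illustration:

```python
import numpy as np

# Hypothetical 4-state transition matrix, partitioned over S1 = {0, 1}
# and S2 = {2, 3}.  All numbers are invented for illustration.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
    [0.2, 0.2, 0.3, 0.3],
])
A, B = P[:2, :2], P[:2, 2:]
C, D = P[2:, :2], P[2:, 2:]

# Stochastic complement of the S1 block: a reduced 2x2 stochastic matrix
# whose stationary vector equals the restriction of pi to S1, renormalized.
S = A + B @ np.linalg.inv(np.eye(2) - D) @ C

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(M.T)
    x = np.real(v[:, np.argmin(np.abs(w - 1))])
    return x / x.sum()

pi = stationary(P)       # stationary distribution of the full chain
pi_red = stationary(S)   # stationary distribution of the reduced system
assert np.allclose(pi_red, pi[:2] / pi[:2].sum())
```

The reduced 2x2 solve reproduces the full chain's stationary probabilities on S1 up to renormalization, which is the sense in which a reduced system suffices.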

2.
This paper is concerned with the stability of a preemptive priority queueing system with customer transfers. Conditions for the queueing system to be stable/unstable are found. An interesting result is that the stability/instability conditions are independent of the service rates of lower priority customers and the transfer rates.

3.
This paper considers the augmented truncation approximation of the generator of an ergodic continuous-time Markov chain with a countably infinite state space. The main purpose of this paper is to present bounds on the absolute difference between the stationary distributions of the original generator and its augmented truncation. As examples, we apply the bounds to an M/M/s retrial queue and an upper Hessenberg Markov chain.
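As an illustration of the general idea (not the paper's bounds), the sketch below truncates the infinite generator of a hypothetical M/M/1 queue, augments it so that the rows still sum to zero, and compares the resulting stationary distribution with the known geometric solution; the rates lam and mu and the truncation level n are invented for the example:

```python
import numpy as np

lam, mu = 1.0, 2.0   # invented M/M/1 birth/death rates (rho = 0.5)
n = 30               # truncation level

# Northwest-corner truncation of the infinite generator; setting each
# diagonal to minus the off-diagonal row sum drops the upward rate lam
# at level n-1, one standard augmentation (every row sums to zero).
Q = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Q[i, i - 1] = mu
    if i < n - 1:
        Q[i, i + 1] = lam
    Q[i, i] = -Q[i].sum()

# Solve pi Q = 0 with sum(pi) = 1 as an overdetermined least-squares system.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

rho = lam / mu
exact = (1 - rho) * rho ** np.arange(n)   # stationary law of the full chain
assert np.max(np.abs(pi - exact)) < 1e-6  # truncation error shrinks with n
```

For this chain the truncation error is of order rho**n, so already at n = 30 the augmented truncation is numerically indistinguishable from the infinite-state solution.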

4.
The characterization problem of a homogeneous Markov chain (with either finitely many or a countable number of states) is considered for a class of variables satisfying asymptotic independence. A related result, in which some state distribution is prescribed in advance at a given step, is also presented.

5.
An inequality regarding the minimum of P(lim inf{X_n ∈ D_n}) is proved for a class of random sequences. This result is related to the problem of the sufficiency of Markov strategies for Markov decision processes with the Dubins-Savage criterion, to the asymptotic behaviour of nonhomogeneous Markov chains, and to some other problems.

6.
Summary. A random time T is a future independent time for a Markov chain (X_n)_{n≥0} if T is independent of (X_{T+n})_{n≥0} and if (X_{T+n})_{n≥0} is a Markov chain with the same transition probabilities as (X_n)_{n≥0}. This concept is used (with the conditional stationary measure) to give a new and short proof of the basic limit theorem of Markov chains, improving somewhat the result in the null-recurrent case. This work was supported by the Swedish Natural Science Research Council and was done while the author was visiting the Department of Statistics, Stanford University.

7.
8.
9.
We first give an extension of a theorem of Volkonskii and Rozanov characterizing the strictly stationary random sequences satisfying 'absolute regularity'. Then a strictly stationary sequence {X_k, k = ..., -1, 0, 1, ...} is constructed which is a 0-1-valued instantaneous function of an aperiodic Markov chain with a countable irreducible state space, such that n^{-2} var(X_1 + ... + X_n) approaches 0 arbitrarily slowly as n → ∞ and (X_1 + ... + X_n) is partially attracted to every infinitely divisible law.

10.
The concepts of π-irreducibility, recurrence and transience are introduced into the study of Markov chains in random environments. It is proved that a π-irreducible chain must be either recurrent or transient; a criterion is given for recurrence of Markov chains in double-infinite random environments; the existence of an invariant measure for π-irreducible chains in double-infinite environments is discussed; and Orey's open questions are thereby partially answered.

11.
In this paper, we propose several relaxation algorithms for solving the tensor equation arising from the higher-order Markov chain and the multilinear PageRank. A semi-symmetrization technique applied to the original equation is also employed to modify the proposed algorithms. Convergence analysis is given for the proposed algorithms. Numerical experiments show that the new algorithms are more efficient than existing ones when the relaxation parameters are chosen suitably.
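For context, the multilinear PageRank equation mentioned above has the fixed-point form x = α R(x ⊗ x) + (1 − α)v. The sketch below runs a damped (relaxation-style) fixed-point iteration on an invented 2-state example; it is not the paper's algorithm, and the tensor flattening R, the damping parameter omega, and α are all assumptions, with α < 1/2 chosen so the iteration provably contracts:

```python
import numpy as np

# Invented 2 x 4 column-stochastic flattening R of a third-order tensor,
# so R @ kron(x, x) is again a probability vector for stochastic x.
R = np.array([
    [1.0, 0.3, 0.3, 0.0],
    [0.0, 0.7, 0.7, 1.0],
])
v = np.array([0.5, 0.5])  # teleportation vector
alpha = 0.4               # alpha < 1/2 guarantees a unique fixed point
omega = 0.5               # assumed relaxation (damping) parameter

x = v.copy()
for _ in range(200):
    # Damped fixed-point step for x = alpha * R (x kron x) + (1 - alpha) v
    x_new = alpha * (R @ np.kron(x, x)) + (1 - alpha) * v
    x = (1 - omega) * x + omega * x_new

residual = np.linalg.norm(alpha * (R @ np.kron(x, x)) + (1 - alpha) * v - x)
assert residual < 1e-8
```

With omega = 1 this reduces to the plain fixed-point iteration; the relaxation parameter trades per-step progress against stability, which is the tuning question the paper's experiments address.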

12.
For simple random walk on an N-vertex graph, the mean time to cover all vertices is at least cN log N, where c > 0 is an absolute constant. This is deduced from a more general result about stationary finite-state reversible Markov chains. Under weak conditions, the covering time for such processes is at least c times the covering time for the corresponding i.i.d. process.
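A quick empirical companion to this bound, using the n-cycle as the graph (the trial count and seed are arbitrary choices): the cycle's mean cover time is n(n-1)/2, which is Θ(n²) and therefore comfortably above a universal c·N·log N lower bound.

```python
import random

def cover_time(n, trials=300, seed=1):
    """Empirical mean cover time of simple random walk on the n-cycle."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, seen, steps = 0, {0}, 0
        while len(seen) < n:
            pos = (pos + rng.choice((-1, 1))) % n  # step left or right
            seen.add(pos)
            steps += 1
        total += steps
    return total / trials

# Exact mean cover time of the n-cycle is n(n-1)/2 = 780 for n = 40.
mean_cover = cover_time(40)
assert 550 < mean_cover < 1050
```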

13.
This work develops asymptotic expansions for solutions of systems of backward equations of time-inhomogeneous Markov chains in continuous time. Owing to the rapid progress in technology and the increasing complexity in modeling, the underlying Markov chains often have large state spaces, which make the computational tasks infeasible. To reduce the complexity, two-time-scale formulations are used. By introducing a small parameter ε > 0 and using suitable decomposition and aggregation procedures, the problem is formulated as a singular perturbation problem. Both Markov chains having recurrent states only and Markov chains also including transient states are treated. Under certain weak irreducibility and smoothness conditions on the generators, the desired asymptotic expansions are constructed. Then error bounds are obtained.

14.
We consider several multi-server retrial queueing models with exponential retrial times that arise in the literature on retrial queues. The effect of the retrial rates on the behavior of the queue length process is investigated via a sample path approach. We show that the numbers of customers in orbit and in the system as a whole change monotonically if the retrial rates in one system are bounded by those in a second one. The monotonicity results are applied to show the convergence of generalized truncated systems that have been widely used for approximating the stationary queue length distribution in retrial queues. AMS subject classifications: Primary 60K25

15.
In this paper we construct an algorithm of successive approximation type that computes a locally optimal periodic policy for a Markov decision chain with partial state information. The algorithm is applied to a queueing network with server control. The research of this author has been supported by the Netherlands Organization for Scientific Research (N.W.O.).

16.
Conditions for the finiteness of long run costs and rewards associated with infinite recurrent Markov chains that may be discrete or continuous in time are considered. Without resorting to results from the theory of Markov processes on general state spaces we provide instructive proofs in the course of which we derive auxiliary results that are of interest in themselves. Potential applications of the finiteness conditions are outlined in order to elucidate their high practical relevance.

17.
In many applications of Markov chains, and especially in Markov chain Monte Carlo algorithms, the rate of convergence of the chain is of critical importance. Most techniques to establish such rates require bounds on the distribution of the random regeneration time T that can be constructed, via splitting techniques, at times of return to a “small set” C satisfying a minorisation condition P(x, ·) ≥ εν(·), x ∈ C. Typically, however, it is much easier to get bounds on the time τ_C of return to the small set itself, usually based on a geometric drift function V ≥ 1 satisfying PV ≤ λV + b1_C, where λ < 1. We develop a new relationship between T and τ_C, and this gives a bound on the tail of T, based on V, λ and b, which is a strict improvement on existing results. When evaluating rates of convergence we see that our bound usually gives considerable numerical improvement on previous expressions.

18.
We derive sufficient conditions for ∫ λ(dx) ‖P^n(x, ·) − π‖ to be of order o(ψ(n)^{-1}), where P^n(x, A) are the transition probabilities of an aperiodic Harris recurrent Markov chain, π is the invariant probability measure, λ an initial distribution and ψ belongs to a suitable class of non-decreasing sequences. The basic condition involved is ergodicity of order ψ, which in a countable state space is equivalent to Σ ψ(n) P_i(τ_i > n) < ∞ for some i, where τ_i is the hitting time of the state i. We also show that for a general Markov chain to be ergodic of order ψ it suffices that a corresponding condition is satisfied by a small set. We apply these results to non-singular renewal measures on R, providing a probabilistic method to estimate the right tail of the renewal measure when the increment distribution F satisfies ∫ t F(dt) > 0 and ∫ ψ(t)(1 − F(t)) dt < ∞.

19.
Let M_T be the mean first passage matrix for an n-state ergodic Markov chain with transition matrix T. We partition T as a 2×2 block matrix and show how to reconstruct M_T efficiently by using the blocks of T and the mean first passage matrices associated with the non-overlapping Perron complements of T. We present a schematic diagram showing how this method for computing M_T can be implemented in parallel. We analyse the asymptotic number of multiplication operations necessary to compute M_T by our method and show that, for large problems, the number of multiplications is reduced by about 1/8, even if the algorithm is implemented serially. We present five examples of moderate size (of orders 20-200) and give the reduction in the total number of flops (as opposed to multiplications) in the computation of M_T. The examples show that when the diagonal blocks in the partitioning of T are of equal size, the reduction in the number of flops can be much better than 1/8. Copyright © 2001 John Wiley & Sons, Ltd.
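For reference, the classical one-shot way to obtain the whole mean first passage matrix is the Kemeny-Snell fundamental-matrix formula (the baseline this partitioned parallel scheme improves on, not the paper's method itself), shown here on an invented 3-state chain:

```python
import numpy as np

# Invented 3-state ergodic transition matrix T (all entries positive).
T = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.2, 0.4],
    [0.3, 0.3, 0.4],
])
n = T.shape[0]

# Stationary distribution pi: left eigenvector of T for eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Fundamental matrix Z = (I - T + 1 pi^T)^{-1}, then
# M_T = (I - Z + E Z_dg) D with D = diag(1/pi); the diagonal of M_T
# holds the mean return times 1/pi_j.
Z = np.linalg.inv(np.eye(n) - T + np.outer(np.ones(n), pi))
M = (np.eye(n) - Z + np.ones((n, n)) @ np.diag(np.diag(Z))) @ np.diag(1.0 / pi)

# Sanity check: M satisfies the first-passage equations
# m_ij = 1 + sum over k != j of t_ik * m_kj.
assert np.allclose(M, np.ones((n, n)) + T @ (M - np.diag(np.diag(M))))
```

The dominant cost here is the single n×n inverse; the paper's contribution is to trade that for smaller solves on the Perron-complement blocks that can run in parallel.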

20.
Focusing on stochastic dynamics that involve continuous states as well as discrete events, this article investigates a stochastic logistic model with regime switching modulated by a singular Markov chain involving a small parameter. This Markov chain undergoes weak and strong interactions, where the small parameter reflects the rapid rate of regime switching within each state class. A two-time-scale formulation is used to reduce the complexity. We obtain weak convergence of the underlying system, so that the limit has a much simpler structure. We then use the structure of the limit system as a bridge to investigate stochastic permanence of the original system, driven by a singular Markov chain with a large number of states. Sufficient conditions for stochastic permanence are obtained. A couple of examples and numerical simulations are given to illustrate the results.
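A minimal Euler-Maruyama sketch of a regime-switching logistic SDE of the kind studied here, with invented two-regime parameters and switching rates (an illustration only, not the paper's model or its two-time-scale analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented two-regime logistic SDE  dX = X(r_i - a_i X) dt + sigma_i X dW,
# where the regime i in {0, 1} switches at rates q01, q10 (a 2-state CTMC).
r, a, sigma = [1.0, 0.5], [1.0, 2.0], [0.2, 0.3]
q01, q10 = 2.0, 3.0
dt, horizon = 1e-3, 20.0

x, i = 0.5, 0
for _ in range(int(horizon / dt)):
    # Euler-Maruyama step in the current regime
    x += x * (r[i] - a[i] * x) * dt \
         + sigma[i] * x * np.sqrt(dt) * rng.standard_normal()
    x = max(x, 1e-12)  # keep the path positive despite discretization error
    # Switch regimes with probability approximately rate * dt
    if rng.random() < (q01 if i == 0 else q10) * dt:
        i = 1 - i

# The logistic drift keeps the path in a band around the equilibria r_i/a_i,
# which is the qualitative content of stochastic permanence.
assert 0.0 < x < 10.0
```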
