Similar Articles
20 similar articles found.
1.
Efficient algorithms for finding steady-state probabilities are presented and compared with the Gaussian elimination method for two special classes of finite-state Markov chains. One class has block matrix steps and allows jumps of up to k block steps; the other is a generalization of the class considered by Shanthikumar and Sargent in which each element is a matrix.
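As a rough illustration of the baseline the abstract compares against, the sketch below finds steady-state probabilities of a small finite-state chain by plain Gaussian elimination, solving pi P = pi with the normalization sum(pi) = 1. This is the generic textbook approach, not the paper's specialized algorithms, and the example matrix is invented.

```python
def stationary_distribution(P):
    """Solve pi P = pi, sum(pi) = 1 by Gaussian elimination.

    Builds the system (P^T - I) pi = 0 and replaces the last
    equation with the normalization sum(pi) = 1.
    """
    n = len(P)
    # A = P^T - I, right-hand side 0
    A = [[P[j][i] - (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [0.0] * n
    # Replace the last (redundant) equation by sum(pi) = 1
    A[n - 1] = [1.0] * n
    b[n - 1] = 1.0
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi
```

For a two-state chain with transition probabilities 0.3 (state 0 to 1) and 0.1 (state 1 to 0), the stationary distribution is (0.25, 0.75), which the solver reproduces.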

2.
We develop some sufficient conditions for the usual stochastic ordering between hitting times of a fixed state for two finite Markov chains with the same state space. Our attention is focused on the so-called skip-free case and, for the proof of our results, we develop a special type of coupling. We also analyze some special cases of applications in the frame of reliability degradation and of times to occurrence of words under random sampling of letters from a finite alphabet. As will be briefly discussed, such fields give rise, in a natural way, to skip-free Markov chains.

3.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern may be expressed as a finite product of reflexive, transitive, and symmetric patterns. This result has implications for the embedding problem for doubly stochastic Markov chains. We also apply the obtained characterization to chain majorization.
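A minimal sketch of the pattern calculus involved: patterns are 0-1 matrices combined by Boolean matrix product, and the characterization concerns products of reflexive, transitive, symmetric patterns (i.e., equivalence-relation patterns). The two example patterns below are hypothetical, chosen only to show that such a product need not itself be symmetric.

```python
def bool_product(A, B):
    """Boolean (pattern) product: C[i][j] = 1 iff A[i][k] = B[k][j] = 1 for some k."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n))) for j in range(n)]
            for i in range(n)]

def is_reflexive(A):
    return all(A[i][i] for i in range(len(A)))

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_transitive(A):
    # A is transitive iff the pattern of A*A is contained in A
    AA = bool_product(A, A)
    n = len(A)
    return all(A[i][j] for i in range(n) for j in range(n) if AA[i][j])

# Two equivalence-relation patterns (reflexive, transitive, symmetric) on 3 states
E1 = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]   # classes {0, 1}, {2}
E2 = [[1, 0, 0], [0, 1, 1], [0, 1, 1]]   # classes {0}, {1, 2}
C = bool_product(E1, E2)                 # a pattern of the kind the theorem characterizes
```

Here C = [[1,1,1],[1,1,1],[0,1,1]] is not symmetric even though both factors are, so the class of representable patterns is strictly larger than the factors themselves.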

4.
In this paper we consider Markov chains of the following type: the state space is the set of vertices of a connected regular graph, and from each vertex transitions are made to the adjacent vertices with equal probabilities. We prove that the mean first-passage matrix F of such a Markov chain is symmetric when the underlying graph is vertex-transitive. Hence we can apply results from a previous paper, in which we investigated general finite ergodic Markov chains with the property F = F^T.
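For a concrete vertex-transitive example, the sketch below approximates the off-diagonal mean first-passage times of simple random walk on the cycle C_5 by fixed-point iteration and checks the claimed symmetry. This is a generic numerical illustration, not the paper's method; the closed form d(n - d) for the mean time to hit a vertex at distance d on C_n is the standard gambler's-ruin result.

```python
def mean_first_passage_cycle(n, iters=500):
    """Mean first-passage times m[i][j] for simple random walk on the cycle C_n.

    Iterates the fixed-point equations m_ij = 1 + (m_{i-1,j} + m_{i+1,j}) / 2
    for i != j, with m_jj fixed at 0 (diagonal left out of the comparison).
    """
    m = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new = [[0.0] * n for _ in range(n)]
        for j in range(n):
            for i in range(n):
                if i != j:
                    new[i][j] = 1.0 + 0.5 * (m[(i - 1) % n][j] + m[(i + 1) % n][j])
        m = new
    return m

F = mean_first_passage_cycle(5)
```

Since C_5 is vertex-transitive, F comes out symmetric, with F[i][j] = d(5 - d) for vertices at distance d.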

5.
For continuous-time Markov chains, we provide criteria for non-ergodicity, non-algebraic ergodicity, non-exponential ergodicity, and non-strong ergodicity. For discrete-time Markov chains, criteria for non-ergodicity, non-algebraic ergodicity, and non-strong ergodicity are given. Our criteria are stated in terms of the existence of solutions to inequalities involving the Q-matrix of the chain (or the transition matrix P in the discrete-time case). These practical criteria are then applied to several examples, including a special class of single birth processes and several multi-dimensional models.

6.

The paper is devoted to studies of regularly and singularly perturbed Markov chains with a damping component. In such models, the matrix of transition probabilities is regularised by adding a special damping matrix multiplied by a small damping (perturbation) parameter ε. We perform a detailed perturbation analysis for such Markov chains. In particular, we give effective upper bounds on the rate at which stationary distributions of perturbed Markov chains with regularised transition matrices approximate the stationary distributions of the unperturbed chains, asymptotic expansions for the approximating stationary distributions with respect to the damping parameter, explicit coupling-type upper bounds for the rate of convergence in ergodic theorems for n-step transition probabilities, and ergodic theorems in triangular array mode.


7.
We study the necessary and sufficient conditions for a finite ergodic Markov chain to converge in a finite number of transitions to its stationary distribution. Using this result, we describe the class of Markov chains which attain the stationary distribution in a finite number of steps, independent of the initial distribution. We then exhibit a queueing model whose Markov chain, embedded at the points of regeneration, falls within this class. Finally, we examine the class of continuous-time Markov processes whose embedded Markov chain possesses this property of rapid convergence, and find that, when the distribution of sojourn times is independent of the state, we can compute the distribution of the system at time t as a simple closed expression.
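A toy instance of such finite-step convergence, constructed by hand for illustration and not taken from the paper: a 3-state chain P that is not rank one but satisfies P^2 = e pi^T (built as P = e pi^T + N with pi uniform and N = u v^T nilpotent, v^T u = 0), so every initial distribution reaches the stationary distribution after exactly two steps.

```python
from fractions import Fraction as Fr

def matmul(A, B):
    """Exact matrix product over fractions."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# P = e pi^T + N with pi = (1/3, 1/3, 1/3) and N nilpotent (N^2 = 0,
# N e = 0, pi^T N = 0), which forces P^2 = e pi^T exactly.
P = [[Fr(5, 12), Fr(1, 6), Fr(5, 12)],
     [Fr(1, 3),  Fr(1, 3), Fr(1, 3)],
     [Fr(1, 4),  Fr(1, 2), Fr(1, 4)]]
P2 = matmul(P, P)  # every row equals the stationary distribution (1/3, 1/3, 1/3)
```

Exact rational arithmetic makes the finite-step claim checkable with equality rather than a numerical tolerance.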

8.
We study the question of geometric ergodicity in a class of Markov chains on the state space of non-negative integers for which, apart from a finite number of boundary rows and columns, the elements p_{jk} of the one-step transition matrix are of the form c_{k-j}, where {c_k} is a probability distribution on the set of integers. Such a process may be described as a general random walk on the non-negative integers with boundary conditions affecting transition probabilities into and out of a finite set of boundary states. The imbedded Markov chains of several non-Markovian queueing processes are special cases of this form. It is shown that there is an intimate connection between geometric ergodicity and geometric bounds on one of the tails of the distribution {c_k}.
This research was supported by the U.S. Office of Naval Research Contract No. Nonr-855(09), and carried out while the author was a visitor in the Statistics Department, University of North Carolina, Chapel Hill.

9.
This note considers continuous-time Markov chains whose state space consists of an irreducible class, C, and an absorbing state that is accessible from C. The purpose is to provide results on μ-invariant and μ-subinvariant measures where absorption occurs with probability less than one. In particular, the well-known premise that the μ-invariant measure, m, for the transition rates be finite is replaced by the more natural premise that m be finite with respect to the absorption probabilities. The relationship between μ-invariant measures and quasi-stationary distributions is discussed.

10.
The main aim of this paper is to examine the applicability of generalized inverses to a wide variety of problems in applied probability where a Markov chain is present either directly or indirectly through some form of imbedding. By characterizing all generalized inverses of I - P, where P is the transition matrix of a finite irreducible discrete-time Markov chain, we are able to obtain general procedures for finding stationary distributions, moments of the first-passage-time distributions, and asymptotic forms for the moments of the occupation-time random variables. It is shown that all known explicit methods for examining these problems can be expressed in this generalized inverse framework. More generally, in the setting of a Markov renewal process, the aforementioned problems are also examined using generalized inverses of I - P. As a special case, Markov chains in continuous time are considered, and we show that the generalized inverse technique can be applied directly to the infinitesimal generator of the process, instead of to I - P, where P is the transition matrix of the discrete-time jump Markov chain.
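One concrete member of this family can be sketched as follows: for irreducible P, the matrix A = I - P + e e^T is nonsingular and pi^T A = e^T, since pi^T (I - P) = 0 and pi^T e = 1, so the stationary distribution is recovered by one linear solve. The example chain and the exact-arithmetic solver below are illustrative assumptions, not the paper's procedures.

```python
from fractions import Fraction as Fr

def solve(A, b):
    """Gauss-Jordan solution of A x = b over exact fractions."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

# A small irreducible birth-death chain (made-up example)
P = [[Fr(1, 2), Fr(1, 2), Fr(0)],
     [Fr(1, 4), Fr(1, 2), Fr(1, 4)],
     [Fr(0),    Fr(1, 2), Fr(1, 2)]]
n = len(P)
# A = I - P + e e^T; since pi^T A = e^T, solve A^T pi = e
A = [[(Fr(1) if i == j else Fr(0)) - P[i][j] + Fr(1) for j in range(n)]
     for i in range(n)]
AT = [[A[j][i] for j in range(n)] for i in range(n)]
pi = solve(AT, [Fr(1)] * n)  # stationary distribution
```

Detailed balance for this birth-death chain gives pi = (1/4, 1/2, 1/4), matching the solve.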

11.
It is often possible to speed up the mixing of a Markov chain \(\{ X_{t} \}_{t \in \mathbb {N}}\) on a state space \(\Omega \) by lifting, that is, running a more efficient Markov chain \(\{ \widehat{X}_{t} \}_{t \in \mathbb {N}}\) on a larger state space \(\hat{\Omega } \supset \Omega \) that projects to \(\{ X_{t} \}_{t \in \mathbb {N}}\) in a certain sense. Chen et al. (Proceedings of the 31st annual ACM symposium on theory of computing. ACM, 1999) prove that for Markov chains on finite state spaces, the mixing time of any lift of a Markov chain is at least the square root of the mixing time of the original chain, up to a factor that depends on the stationary measure of \(\{X_t\}_{t \in \mathbb {N}}\). Unfortunately, this extra factor makes the bound in Chen et al. (1999) very loose for Markov chains on large state spaces and useless for Markov chains on continuous state spaces. In this paper, we develop an extension of the evolving set method that allows us to refine this extra factor and find bounds for Markov chains on continuous state spaces that are analogous to the bounds in Chen et al. (1999). These bounds also allow us to improve on the bounds in Chen et al. (1999) for some chains on finite state spaces.

12.
The result of principal interest established in this paper is that if A is an n × n singular irreducible M-matrix, then a large class of generalized inverses of A possesses the property that each of its elements has all its principal minors nonnegative. The class contains both the group and the Moore-Penrose generalized inverses of A. As an application of our results, it is shown that the fundamental matrix of a continuous-time ergodic Markov chain on a finite state space has all its principal minors nonnegative.

13.
Decision-making in an environment of uncertainty and imprecision for real-world problems is a complex task. In this paper, general finite-state fuzzy Markov chains that converge finitely to a stationary (possibly periodic) solution are introduced. The Cesàro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrency does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.

14.
Consider a continuous-time Markov chain with stationary transition probabilities. A function of the state is observed. A regular conditional probability distribution for the trajectory of the chain, given observations up to time t, is obtained. This distribution also corresponds to a Markov chain, but the conditional chain has nonstationary transition probabilities. In particular, computation of the conditional distribution of the state at time s is discussed. For s > t, we have prediction (extrapolation), while s < t corresponds to smoothing (interpolation). Equations for the conditional state distribution are given in matrix form and as recursive differential equations with varying s or t. These differential equations are closely related to Kolmogorov's forward and backward equations. Markov chains with one observed and one unobserved component are treated as a special case. In an example, the conditional distribution of the change-point is derived for a Poisson process with a changing intensity, given observations of the Poisson process.
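A discrete-time analogue of computing the conditional state distribution at the current time (the s = t, filtering case) can be sketched with the standard forward recursion: predict one step with the transition matrix, then condition on the new observation. The two-state chain, emission probabilities, and observation sequence below are invented for illustration and are not from the paper.

```python
def forward_filter(P, E, init, obs):
    """Filtered distributions P(X_t = i | y_1..y_t) for a hidden Markov chain.

    P: transition matrix, E[state][symbol]: emission probabilities,
    init: initial state distribution, obs: sequence of observed symbols.
    """
    n = len(P)
    dist = init[:]
    out = []
    for y in obs:
        # Predict one step ahead, then condition on the observation y
        pred = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
        w = [pred[j] * E[j][y] for j in range(n)]
        z = sum(w)                     # normalizing constant
        dist = [x / z for x in w]
        out.append(dist)
    return out
```

With a sticky two-state chain and three observations all pointing to state 0, the filtered probability of state 0 climbs above 0.9.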

15.
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property, regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M (≥ 2) states.
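One way to see the property computationally: if state j is accessible from state i at all, it is accessible in at most M - 1 steps, and the minimum number of steps is a shortest-path distance in the directed graph of positive transition entries. A sketch, with a made-up example chain:

```python
from collections import deque

def min_steps(P, i, j):
    """Fewest transitions needed to reach state j from state i.

    Breadth-first search on the directed graph with an edge u -> v
    whenever P[u][v] > 0. Returns -1 if j is not accessible from i.
    """
    n = len(P)
    if i == j:
        return 0
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in range(n):
            if P[u][v] > 0 and v not in dist:
                dist[v] = dist[u] + 1
                if v == j:
                    return dist[v]
                q.append(v)
    return -1

# Deterministic 4-state cycle 0 -> 1 -> 2 -> 3 -> 0
P = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
```

On the cycle, state 3 is reached from state 0 in 3 steps, which meets the worst-case bound M - 1 = 3.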

16.
A maximum out forest of a digraph is a spanning subgraph that consists of disjoint diverging trees and has the maximum possible number of arcs. For an arbitrary weighted digraph, we consider a matrix of specific weights of maximum out forests and demonstrate how this matrix can be used to give a graph-theoretic interpretation of the limiting probabilities of Markov chains. Under a special (nonclassical) correspondence between Markov chains and weighted digraphs, the matrix of Cesàro limiting transition probabilities of any finite homogeneous Markov chain coincides with the normalized matrix of maximum out forests of the corresponding digraph. This provides a finite (combinatorial) method to calculate the limiting probabilities of Markov chains and thus their stationary distributions. On the other hand, the Markov chain technique provides proofs of some statements about digraphs.

17.
An eigentime identity is proved for transient symmetrizable Markov chains. For general Markov chains, if the trace of the Green matrix is finite, then the expectation of the first leap time is uniformly bounded; for single birth processes these two conditions are proved to be equivalent. For birth-death processes, explicit formulas are presented. As an application, we give bounds on the exponential convergence rates of the (sub-)Markov semigroup P_t from l to l.

18.
It is shown that under certain conditions, attractive invariant measures for iterated function systems (indeed, for Markov processes on locally compact spaces) depend continuously on parameters of the system. We discuss a special class of iterated function systems, the homogeneous affine ones, for which an inverse problem can in principle be solved by Fourier transform methods. We show that a probability measure μ on R^n can be approximated by invariant measures of finite iterated function systems of this class if \(\hat \mu (t)/\hat \mu (a^T t)\) is positive definite for some nonzero contractive linear map a: R^n → R^n. Moments and cumulants are also discussed.

19.
In this paper, circuit chains of superior order are defined as multiple Markov chains whose transition probabilities are expressed in terms of the weights of a finite class of circuits in a finite set, in connection with kinetic properties along the circuits. Conversely, it is proved that if we join any finite doubly infinite strictly stationary Markov chain of order r, for which transitions hold cyclically, with a second chain having the same transitions for the inverse time-sense, then they may be represented as circuit chains of order r.

20.
In this study, we model and analyse a production line with asynchronous part transfers, processing time variability, and cyclic scheduling in the same framework. We consider a production line with multiple parts and finite interstation buffers. The line produces a batch of n jobs repetitively, using the same order of jobs in every batch. The processing time of a job on a station is a random variable assumed to have a phase-type distribution. Parts are transferred between the stations in an asynchronous manner. We first present a continuous-time Markov chain model to analyse the performance of this system for a given sequence. A state-space representation of the model and the associated rate matrix are generated automatically. The steady-state probabilities of the Markov chain are determined by a recursive method that exploits the special structure of the rate matrix. The cycle time, the production rate, and the expected Work-In-Progress (WIP) inventory are used as the main performance measures. We then present an approximate procedure to determine the cyclic sequence that minimises the cycle time. Finally, we investigate the effects of operating decisions, system structure, processing time variability, and their interactions in the same framework. Numerical results for the performance evaluation and scheduling of cyclic production lines are also presented.
