Similar Articles
Found 20 similar articles (search time: 608 ms)
1.
In previous work, the embedding problem has been examined within the entire set of discrete-time Markov chains. For several phenomena, however, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove a necessary condition on the transition matrix of a discrete-time Markov chain with ordered states for it to be embeddable in a state-wise monotone Markov chain with respect to time intervals of length 0.5: a transition matrix with a square root within the set of state-wise monotone matrices has trace at least equal to 1.
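The necessary condition can be checked numerically. A minimal sketch (the matrix Q below is an invented example): Q is state-wise monotone and stochastic, so P = Q² has a monotone square root, and the paper's condition trace(P) ≥ 1 must hold.

```python
import numpy as np

# Invented state-wise monotone stochastic matrix Q: the rows' cumulative
# distributions are stochastically ordered (non-increasing down the rows).
Q = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])
cum = np.cumsum(Q, axis=1)
assert np.all(np.diff(cum, axis=0) <= 1e-12)  # state-wise monotonicity

P = Q @ Q           # Q is a state-wise monotone square root of P
print(np.trace(P))  # necessary condition from the paper: trace(P) >= 1
```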

2.
Algebraic convergence for discrete-time ergodic Markov chains
This paper studies the l-ergodicity of discrete-time recurrent Markov chains. It proves that the l-order deviation matrix exists and is finite if and only if the chain is (l+2)-ergodic, and the algebraic decay rates of the n-step transition probabilities to the stationary distribution are then obtained. Criteria for l-ergodicity are given in terms of the existence of a solution to an equation. The main results are illustrated by some examples.

3.
The study addresses the matrix operator equations of a special form which are used in the theory of Markov chains. Solving the operator equations with stochastic transition probability matrices of large finite or even countably infinite size reduces to the case of stochastic matrices of small size. In particular, the case of ternary chains is considered in detail. A Markov model for crack growth in a composite serves as an example of application.

4.
We present a lower and an upper bound for the second smallest eigenvalue of Laplacian matrices in terms of the averaged minimal cut of weighted graphs. This is used to obtain an upper bound for the real parts of the non-maximal eigenvalues of irreducible nonnegative matrices. The result can be applied to Markov chains.
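The quantity being bounded can be computed directly. A sketch (graph weights invented) that computes the second smallest Laplacian eigenvalue, i.e. the algebraic connectivity, which the paper bounds via the averaged minimal cut:

```python
import numpy as np

# Invented weighted adjacency matrix of a connected graph
W = np.array([[0., 2., 0., 1.],
              [2., 0., 1., 0.],
              [0., 1., 0., 3.],
              [1., 0., 3., 0.]])
L = np.diag(W.sum(axis=1)) - W        # weighted graph Laplacian
eig = np.sort(np.linalg.eigvalsh(L))  # real spectrum, ascending
lambda2 = eig[1]                      # second smallest: algebraic connectivity
print(lambda2)                        # positive exactly when the graph is connected
```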

5.
We investigate the deviation matrix for discrete-time GI/M/1-type Markov chains using the matrix-analytic method, and revisit the link between the deviation matrix and the asymptotic variance. Parallel results are obtained for continuous-time GI/M/1-type Markov chains via the technique of uniformization. We conclude with A. B. Clarke's tandem queue as an illustrative example, and compute the asymptotic variance of the queue length for this model.
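For a finite ergodic chain the deviation matrix has the closed form D = (I − P + Π)⁻¹ − Π, where Π has every row equal to the stationary vector. A sketch with an invented 3-state birth-death matrix:

```python
import numpy as np

# Invented 3-state birth-death chain
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()                      # stationary distribution
Pi = np.tile(pi, (3, 1))            # matrix with every row equal to pi
D = np.linalg.inv(np.eye(3) - P + Pi) - Pi   # deviation matrix
print(np.allclose(pi @ D, 0))       # pi D = 0, and (I - P) D = I - Pi
```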

6.
We consider an accessibility index for the states of a discrete-time, ergodic, homogeneous Markov chain on a finite state space; this index is naturally associated with the random walk centrality introduced by Noh and Rieger (2004) for a random walk on a connected graph. We observe that the vector of accessibility indices provides a partition of Kemeny’s constant for the Markov chain. We provide three characterizations of this accessibility index: one in terms of the first return time to the state in question, and two in terms of the transition matrix associated with the Markov chain. Several bounds are provided on the accessibility index in terms of the eigenvalues of the transition matrix and the stationary vector, and the bounds are shown to be tight. The behaviour of the accessibility index under perturbation of the transition matrix is investigated, and examples exhibiting some counter-intuitive behaviour are presented. Finally, we characterize the situation in which the accessibility indices for all states coincide.
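Kemeny's constant, which the accessibility indices partition, can be verified numerically through mean first passage times: the weighted sum K_i = Σ_j π_j m_ij does not depend on the starting state i. A sketch with an invented transition matrix:

```python
import numpy as np

P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.2, 0.5],
              [0.5, 0.5, 0.0]])
n = P.shape[0]
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
Z = np.linalg.inv(np.eye(n) - P + np.tile(pi, (n, 1)))  # fundamental matrix
M = (np.diag(Z)[None, :] - Z) / pi[None, :]  # mean first passage times, M[i,i] = 0
K = M @ pi                                   # K_i = sum_j pi_j * m_ij
print(K)                                     # constant vector: Kemeny's constant
```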

7.
In this paper, we use the Markov chain censoring technique to study infinite state Markov chains whose transition matrices possess block-repeating entries. We demonstrate that a number of important probabilistic measures are invariant under censoring. Informally speaking, these measures involve first passage times or expected numbers of visits to certain levels where other levels are taboo; they are closely related to the so-called fundamental matrix of the Markov chain which is also studied here. Factorization theorems for the characteristic equation of the blocks of the transition matrix are obtained. Necessary and sufficient conditions are derived for such a Markov chain to be positive recurrent, null recurrent, or transient based either on spectral analysis, or on a property of the fundamental matrix. Explicit expressions are obtained for key probabilistic measures, including the stationary probability vector and the fundamental matrix, which could be potentially used to develop various recursive algorithms for computing these measures.
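Censoring on a subset A folds excursions through the complementary taboo set B into direct transitions: P_A = P_AA + P_AB (I − P_BB)⁻¹ P_BA. A sketch with an invented 4-state matrix, verifying that the stationary vector is invariant under censoring:

```python
import numpy as np

P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.4, 0.2, 0.3, 0.1]])
A, B = [0, 1], [2, 3]
PAA, PAB = P[np.ix_(A, A)], P[np.ix_(A, B)]
PBA, PBB = P[np.ix_(B, A)], P[np.ix_(B, B)]
# Censored chain on A: excursions into B are folded into A-to-A transitions
Pc = PAA + PAB @ np.linalg.inv(np.eye(len(B)) - PBB) @ PBA

def stationary(M):
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

pi = stationary(P)
print(np.allclose(stationary(Pc), pi[A] / pi[A].sum()))  # invariance under censoring
```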

8.

The paper is devoted to studies of regularly and singularly perturbed Markov chains with a damping component. In such models, the matrix of transition probabilities is regularised by adding a special damping matrix multiplied by a small damping (perturbation) parameter ε. We perform a detailed perturbation analysis for such Markov chains; in particular, we give effective upper bounds on the rate of approximation of the stationary distributions of unperturbed Markov chains by the stationary distributions of perturbed Markov chains with regularised transition probability matrices, asymptotic expansions for the approximating stationary distributions with respect to the damping parameter, explicit coupling-type upper bounds on the rate of convergence in ergodic theorems for n-step transition probabilities, and ergodic theorems in triangular array mode.
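The regularisation described above has the same shape as the familiar PageRank damping trick, P_ε = (1 − ε)P + εD. A sketch with an invented periodic chain and the uniform matrix as the (assumed) damping matrix D, showing that the perturbed stationary distribution stays close to the unperturbed one:

```python
import numpy as np

# Invented periodic (hence awkward) unperturbed chain
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
n, eps = 3, 0.01
D = np.full((n, n), 1.0 / n)       # damping matrix (uniform choice assumed)
P_eps = (1 - eps) * P + eps * D    # regularised transition matrix

def stationary(M):
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

print(stationary(P_eps))           # O(eps)-close to the unperturbed [0.25, 0.5, 0.25]
```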


9.
Given a sequence of transition matrices for a nonstationary Markov chain, a matrix whose product on the right of a transition matrix yields the next transition matrix is called a causative matrix. A causative matrix is strongly causative if successive products continue to yield stochastic matrices. This paper presents necessary and sufficient conditions for a matrix to be causative and strongly causative with respect to an invertible transition matrix, by considering the causative matrix as a linear transformation on the rows of the transition matrix.
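For an invertible transition matrix P1 followed by P2, the causative matrix is simply C = P1⁻¹P2; strong causativity further demands that repeated right-multiplication keeps producing stochastic matrices. A sketch with invented 2-state matrices:

```python
import numpy as np

P1 = np.array([[0.8, 0.2],
               [0.3, 0.7]])
P2 = np.array([[0.7, 0.3],
               [0.4, 0.6]])
C = np.linalg.solve(P1, P2)   # causative matrix: P1 @ C == P2
print(P1 @ C)                 # recovers P2
# Strong causativity would require P2 @ C, P2 @ C @ C, ... to stay stochastic
P3 = P2 @ C
print(P3.sum(axis=1), (P3 >= 0).all())
```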

10.
An absorbing Markov chain is an important statistical model, widely used in algorithmic modeling across many disciplines, such as digital image processing and network analysis. To obtain the stationary distribution of such a model, the inverse of the transition matrix usually needs to be calculated, which remains difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution when the transition matrix is diagonalizable. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it is able to handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains; similar techniques can be used to avoid calculating the inverse matrix in the general method.
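The gambler's-ruin probabilities mentioned above can be obtained by the standard fundamental-matrix route (the inverse-based computation the paper's eigenvector method avoids). A sketch for a fair game with invented stake N = 4:

```python
import numpy as np

# Gambler's ruin, fair coin: states 0..N, with 0 and N absorbing
N, p = 4, 0.5
Q = np.zeros((N - 1, N - 1))               # transitions among transient states 1..N-1
for i in range(N - 1):
    if i > 0:
        Q[i, i - 1] = 1 - p                # lose a unit
    if i < N - 2:
        Q[i, i + 1] = p                    # win a unit
R = np.zeros((N - 1, 2))                   # transitions into the absorbing states {0, N}
R[0, 0], R[N - 2, 1] = 1 - p, p
B = np.linalg.solve(np.eye(N - 1) - Q, R)  # absorption probabilities
print(B[:, 1])                             # P(reach N before 0 | start i) = i/N
```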

12.
For continuous-time Markov chains, we provide criteria for non-ergodicity, non-algebraic ergodicity, non-exponential ergodicity, and non-strong ergodicity. For discrete-time Markov chains, criteria for non-ergodicity, non-algebraic ergodicity, and non-strong ergodicity are given. Our criteria are in terms of the existence of solutions to inequalities involving the Q-matrix (or transition matrix P in time-discrete case) of the chain. Meanwhile, these practical criteria are applied to some examples, including a special class of single birth processes and several multi-dimensional models.  相似文献   

13.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern may be expressed as a finite product of reflexive, transitive, and symmetric patterns. This result has implications for the embedding problem for doubly stochastic Markov chains. We also give the application of the obtained characterization to the chain majorization.  相似文献   

14.
谭尚旺, 张德龙. 《数学杂志》 (Journal of Mathematics), 2002, 22(4): 475–480
Let A be an n×n tournament matrix and let k be a nonnegative integer. Reference [3] characterized the n×n tournament matrices with exactly three distinct eigenvalues, and reference [4] characterized those with exactly four distinct eigenvalues of which 0 is an eigenvalue of multiplicity one. In this paper we study two problems: (1) the properties of A when k is an eigenvalue of A; (2) the characterization of all n×n tournament matrices with exactly four distinct eigenvalues of which k is an eigenvalue of multiplicity one.
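A tournament matrix A satisfies A + Aᵀ = J − I. A quick sketch constructing the 3-cycle (rotational) tournament and inspecting its eigenvalues:

```python
import numpy as np

# The 3-cycle tournament: 0 beats 1, 1 beats 2, 2 beats 0
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
n = A.shape[0]
assert np.array_equal(A + A.T, np.ones((n, n)) - np.eye(n))  # tournament property
eig = np.linalg.eigvals(A)
print(np.round(eig, 6))   # three distinct eigenvalues on the unit circle
```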

15.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
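A cheap way to restore reversibility, though not the norm-closest solution of the paper (which requires solving the convex programme), is the additive reversibilization R = ½(P + D⁻¹PᵀD) with D = diag(π); it preserves π and satisfies detailed balance. A sketch with an invented non-reversible matrix:

```python
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
D = np.diag(pi)
R = 0.5 * (P + np.linalg.inv(D) @ P.T @ D)   # additive reversibilization
# R is stochastic, keeps the same stationary vector pi, and is pi-reversible:
print(np.allclose(D @ R, (D @ R).T))         # detailed balance
```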

16.
Let $\mathcal{T}_n$ be the compact convex set of tridiagonal doubly stochastic matrices. These arise naturally in probability problems as birth and death chains with a uniform stationary distribution. We study ‘typical’ matrices $T \in \mathcal{T}_n$ chosen uniformly at random from $\mathcal{T}_n$. A simple algorithm is presented to allow direct sampling from the uniform distribution on $\mathcal{T}_n$. Using this algorithm, the elements above the diagonal in $T$ are shown to form a Markov chain. For large $n$, the limiting Markov chain is reversible and explicitly diagonalizable with transformed Jacobi polynomials as eigenfunctions. These results are used to study the limiting behavior of such typical birth and death chains, including their eigenvalues and mixing times. The results on uniform random tridiagonal doubly stochastic matrices are related to the distribution of alternating permutations chosen uniformly at random. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 42, 403–437, 2013
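A tridiagonal doubly stochastic matrix is fixed by its superdiagonal entries, which must be nonnegative with consecutive sums at most 1. A sketch that samples one by rejection (an assumed simple approach, not the paper's direct sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Superdiagonal c_1..c_{n-1} subject to c_i >= 0 and c_i + c_{i+1} <= 1
while True:                                  # rejection step
    c = rng.uniform(0, 1, n - 1)
    if np.all(c[:-1] + c[1:] <= 1):
        break
T = np.zeros((n, n))
for i in range(n - 1):
    T[i, i + 1] = T[i + 1, i] = c[i]         # symmetric off-diagonals
np.fill_diagonal(T, 1 - np.concatenate(([0], c)) - np.concatenate((c, [0])))
print(T.sum(axis=0), T.sum(axis=1))          # both all ones: doubly stochastic
```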

17.
In many applications of absorbing Markov chains, solution of the problem at hand involves finding the mean time to absorption. Moreover, in almost all real world applications of Markov chains, accurate estimation of the elements of the probability matrix is a major concern. This paper develops a technique that provides close estimates of the mean number of stages before absorption with only the row sums of the transition matrix of transient states.
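The exact mean absorption times solve (I − Q)t = 1; the point above is that row sums of Q alone already constrain them. A sketch (Q invented) showing the exact solve next to the crude row-sum bound t_i ≤ 1/(1 − s) when every row sum of Q is at most s < 1:

```python
import numpy as np

# Invented transient-to-transient block Q of an absorbing chain
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))  # exact mean steps to absorption
s = Q.sum(axis=1).max()                         # largest row sum of Q
print(t, 1 / (1 - s))                           # each t_i is at most 1/(1 - s)
```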

18.
Ergodic degrees for continuous-time Markov chains
This paper studies the existence of higher-order deviation matrices for continuous-time Markov chains via moments of hitting times. An estimate of the polynomial convergence rate of the transition matrix to the stationary measure is obtained. Finally, explicit formulas for birth-death processes are presented.

19.
Harris recurrence is a widely used tool in the analysis of queueing systems. For discrete-time Harris chains, such systems automatically exhibit wide-sense regenerative structure, so that renewal theory can be applied to questions related to convergence of the transition probabilities to the equilibrium distribution. By contrast, in continuous time, the question of whether all Harris recurrent Markov processes are automatically wide-sense regenerative is an open problem. This paper reviews the key structural results related to regeneration for discrete-time chains and continuous time Markov processes, and describes the key remaining open problem in this subject area.

20.
Given a row-stochastic matrix describing pairwise similarities between data objects, spectral clustering makes use of the eigenvectors of this matrix to perform dimensionality reduction for clustering in fewer dimensions. One example from this class of algorithms is the Robust Perron Cluster Analysis (PCCA+), which delivers a fuzzy clustering. Originally developed for clustering the state space of Markov chains, the method became popular as a versatile tool for general data classification problems. The robustness of PCCA+, however, cannot be explained by previous perturbation results, because the matrices in typical applications do not comply with the two main requirements: reversibility and nearly decomposability. We therefore demonstrate in this paper that PCCA+ always delivers an optimal fuzzy clustering for nearly uncoupled, not necessarily reversible, Markov chains with transition states.
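The spectral structure PCCA+ exploits can be seen on a toy example: a nearly uncoupled chain has as many eigenvalues near 1 as metastable blocks, and the sign pattern of the corresponding eigenvectors identifies the blocks. A sketch with an invented two-block chain (this illustrates the eigenvector basis only, not the PCCA+ membership computation itself):

```python
import numpy as np

# Two weakly coupled blocks {0, 1} and {2, 3}
P = np.array([[0.495, 0.495, 0.005, 0.005],
              [0.495, 0.495, 0.005, 0.005],
              [0.005, 0.005, 0.495, 0.495],
              [0.005, 0.005, 0.495, 0.495]])
w, V = np.linalg.eigh(P)            # this P happens to be symmetric, so eigh applies
idx = np.argsort(w)[::-1]           # sort eigenpairs by descending eigenvalue
w, V = w[idx], V[:, idx]
print(np.round(w, 3))               # two eigenvalues near 1 -> two clusters
v2 = V[:, 1]                        # second eigenvector: signs separate the blocks
print(np.sign(v2))
```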
