Similar documents
10 similar documents found.
1.
An absorbing Markov chain is an important statistical model that is widely used in algorithm modeling across many disciplines, such as digital image processing and network analysis. To obtain the stationary distribution for such a model, the inverse of the transition matrix usually needs to be calculated, which remains difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution for models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it can handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains, and similar techniques can be used to avoid calculating the inverse matrix in the general method.
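A minimal numerical sketch of the eigenvector idea, using the gambler's ruin chain as the example; the chain size, win probability, and helper names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gamblers_ruin_matrix(N, p):
    """Transition matrix of the gambler's ruin chain on states 0..N.

    States 0 and N are absorbing; from 1..N-1 the gambler wins one unit
    with probability p and loses one with probability 1 - p.
    """
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = P[N, N] = 1.0
    for i in range(1, N):
        P[i, i + 1] = p
        P[i, i - 1] = 1.0 - p
    return P

def absorption_probabilities(P, win_state, lose_state):
    """Absorption probabilities in `win_state` via an eigenvalue-1 eigenvector.

    For a diagonalizable chain with two absorbing states, the eigenvalue-1
    eigenspace of P is two-dimensional; the vector h in that space with
    h[win_state] = 1 and h[lose_state] = 0 collects, entry by entry, the
    probability of eventually being absorbed in `win_state`.
    """
    vals, vecs = np.linalg.eig(P)
    idx = np.where(np.isclose(vals, 1.0))[0]   # the two eigenvalue-1 directions
    V = np.real(vecs[:, idx])                  # basis of the eigenspace
    # pick coefficients c so that V @ c meets the boundary conditions
    A = V[[win_state, lose_state], :]
    c = np.linalg.solve(A, np.array([1.0, 0.0]))
    return V @ c

P = gamblers_ruin_matrix(N=10, p=0.45)
h = absorption_probabilities(P, win_state=10, lose_state=0)
print(h[3])   # probability of reaching 10 before 0, starting from capital 3
```

For the gambler's ruin chain this reproduces the classical ruin probabilities without ever forming the inverse of a matrix.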

2.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach is to construct a Markov chain that converges to the desired probability distribution and run it to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require knowledge of the stationary distribution only up to a normalizing constant. Each of them produces a reversible Markov chain that has the same stationary distribution as the original chain and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result showing that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
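The abstract does not spell out the two constructions; as background, here is a plain discrete Metropolis-Hastings kernel built from a target known only up to a normalizing constant. The random-walk proposal and all numerical values are illustrative assumptions:

```python
import numpy as np

def metropolis_hastings_kernel(weights, proposal):
    """Reversible transition matrix with stationary distribution weights / sum(weights).

    `weights` is the target known only up to a normalizing constant;
    `proposal` is any irreducible proposal transition matrix. Acceptance
    ratios only involve ratios of weights, so the constant cancels.
    """
    n = len(weights)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and proposal[i, j] > 0:
                accept = min(1.0, (weights[j] * proposal[j, i]) /
                                  (weights[i] * proposal[i, j]))
                P[i, j] = proposal[i, j] * accept
        P[i, i] = 1.0 - P[i].sum()          # rejection mass stays put
    return P

# random-walk proposal on a 5-point path, unnormalized target (1, 2, 3, 2, 1)
n = 5
proposal = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            proposal[i, j] = 0.5
weights = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
P = metropolis_hastings_kernel(weights, proposal)

# the stationary distribution of P is proportional to the weights
pi = weights / weights.sum()
print(np.allclose(pi @ P, pi))   # True
```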

3.

The paper is devoted to studies of regularly and singularly perturbed Markov chains with a damping component. In such models, the matrix of transition probabilities is regularised by adding a special damping matrix multiplied by a small damping (perturbation) parameter ε. We perform a detailed perturbation analysis for such Markov chains. In particular, we give effective upper bounds on the rate at which stationary distributions of unperturbed Markov chains are approximated by stationary distributions of perturbed Markov chains with regularised matrices of transition probabilities, asymptotic expansions for the approximating stationary distributions with respect to the damping parameter, explicit coupling-type upper bounds on the rate of convergence in ergodic theorems for n-step transition probabilities, as well as ergodic theorems in triangular array mode.
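The abstract does not reproduce the exact form of the damping matrix; a common PageRank-style regularisation, assumed here purely for illustration, mixes P with a fixed damping distribution d, giving P_ε = (1 − ε)P + ε·1dᵀ:

```python
import numpy as np

def damped_matrix(P, d, eps):
    """PageRank-style regularisation: every row mixes P with the damping distribution d."""
    n = P.shape[0]
    return (1.0 - eps) * P + eps * np.outer(np.ones(n), d)

def stationary(P):
    """Stationary distribution as the normalized left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# a reducible (hence singularly perturbed) chain: two closed classes that never communicate
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.3, 0.7]])
d = np.full(4, 0.25)                      # uniform damping distribution

for eps in (0.1, 0.01, 0.001):
    print(eps, stationary(damped_matrix(P, d, eps)))
# as eps -> 0 the perturbed stationary distributions approach a limit;
# the paper's bounds quantify this kind of approximation behaviour
```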


4.
Markov chains are often used as mathematical models of natural phenomena, with transition probabilities defined in terms of parameters that are of interest in the scientific question at hand. Sensitivity analysis is an important way to quantify the effects of changes in these parameters on the behavior of the chain. Many properties of Markov chains can be written as simple matrix expressions, and hence matrix calculus is a powerful approach to sensitivity analysis. Using matrix calculus, we derive the sensitivity and elasticity of a variety of properties of absorbing and ergodic finite-state chains. For absorbing chains, we present the sensitivities of the moments of the number of visits to each transient state, the moments of the time to absorption, the mean number of states visited before absorption, the quasistationary distribution, and the probabilities of absorption in each of several absorbing states. For ergodic chains, we present the sensitivity of the stationary distribution, the mean first passage time matrix, the fundamental matrix, and the Kemeny constant. We include two examples of application of the results to demographic and ecological problems.
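A small worked sketch of the matrix-calculus approach for the absorbing case. The transient block below is made up, and the only ingredient used is the identity d(I − Q)⁻¹ = (I − Q)⁻¹ (dQ) (I − Q)⁻¹; the paper derives far more quantities than this:

```python
import numpy as np

# transient block Q of an absorbing chain (values illustrative), parameterized by p
def Q_of(p):
    return np.array([[0.0, p],
                     [0.3, 0.2]])

def fundamental(Q):
    """Fundamental matrix N = (I - Q)^{-1}; N[i, j] is the expected number of
    visits to transient state j starting from transient state i."""
    return np.linalg.inv(np.eye(Q.shape[0]) - Q)

p = 0.5
Q = Q_of(p)
N = fundamental(Q)
t = N @ np.ones(2)                 # mean time to absorption from each transient state

# sensitivity of N to the parameter p via matrix calculus:
# d(I - Q)^{-1}/dp = N (dQ/dp) N
dQ_dp = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
dN_dp = N @ dQ_dp @ N
dt_dp = dN_dp @ np.ones(2)         # sensitivity of the mean absorption times

# sanity check against a finite difference
h = 1e-6
t_fd = (fundamental(Q_of(p + h)) @ np.ones(2) - t) / h
print(np.allclose(dt_dp, t_fd, atol=1e-4))   # True
```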

5.
In previous work, the embedding problem was examined within the entire set of discrete-time Markov chains. For several phenomena, however, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove necessary conditions for the transition matrix of a discrete-time Markov chain with ordered states to be embeddable in a state-wise monotone Markov chain with respect to time intervals of length 0.5: a transition matrix that has a square root within the set of state-wise monotone matrices has trace at least equal to 1.
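A toy illustration of how the necessary condition can serve as a screening test; the matrix, the function name, and the extra principal-square-root check are illustrative additions, and the paper's full state-wise monotonicity test is not reproduced here:

```python
import numpy as np
from scipy.linalg import sqrtm

def half_step_screen(P, tol=1e-10):
    """Screening checks for a square root that is again a transition matrix.

    trace(P) >= 1 is the necessary condition quoted in the abstract for a
    root within the set of state-wise monotone matrices; whether the
    principal square root is entrywise non-negative with unit row sums is
    an additional elementary check.
    """
    if np.trace(P) < 1.0 - tol:
        return False                      # no state-wise monotone root can exist
    R = np.real(sqrtm(P))
    return bool(np.all(R >= -tol) and np.allclose(R.sum(axis=1), 1.0))

P = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.6, 0.4]])          # trace = 0.7 < 1, ruled out immediately
print(half_step_screen(P))               # False
```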

6.
Within the set of discrete-time Markov chains, a Markov chain is embeddable if its transition matrix has at least one root that is a stochastic matrix. The present paper examines the embedding problem for discrete-time Markov chains with three states and real eigenvalues. Sufficient embedding conditions are proved for diagonalizable as well as non-diagonalizable transition matrices, and for all possible configurations of the signs of the eigenvalues. The embedding conditions are formulated in terms of the projections and the spectral decomposition of the transition matrix.
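As a sketch of the spectral-decomposition viewpoint (with a made-up three-state matrix): one can build a candidate square root from the eigendecomposition, taking the non-negative square roots of the eigenvalues, and then test whether the result is stochastic. The paper's actual conditions, stated via spectral projections, are more refined than this check:

```python
import numpy as np

def spectral_square_root(P):
    """Candidate square root of a diagonalizable matrix with real,
    non-negative eigenvalues, built from its spectral decomposition."""
    vals, vecs = np.linalg.eig(P)
    vals, vecs = np.real(vals), np.real(vecs)
    if np.any(vals < 0):
        raise ValueError("negative eigenvalue: no real square root of this form")
    return vecs @ np.diag(np.sqrt(vals)) @ np.linalg.inv(vecs)

def is_stochastic(M, tol=1e-10):
    return bool(np.all(M >= -tol) and np.allclose(M.sum(axis=1), 1.0))

# illustrative 3-state transition matrix with real eigenvalues
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.2, 0.7]])
R = spectral_square_root(P)
print(is_stochastic(R), np.allclose(R @ R, P))   # embeddable at half steps if True
```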

7.
We consider the behavior of a stochastic system composed of several identically distributed, but non-independent, discrete-time absorbing Markov chains competing at each instant for a transition. The competition consists of determining at each instant, using a given probability distribution, the only Markov chain allowed to make a transition. We analyze the first time at which one of the Markov chains reaches its absorbing state; we obtain its distribution and expectation and propose an algorithm to compute these quantities. We also exhibit the asymptotic behavior of the system as the number of Markov chains goes to infinity. This problem arises from the analysis of large-scale distributed systems, and we show how our results apply to this domain.
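The paper derives the distribution analytically and gives an algorithm; as a purely illustrative companion, here is a Monte Carlo sketch of the competition mechanism described in the abstract, with a made-up chain and a uniform selection distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# one absorbing chain: states 0..2 transient, state 3 absorbing (values illustrative)
P = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.3, 0.5, 0.2],
              [0.0, 0.0, 0.0, 1.0]])
ABSORBING = 3

def first_absorption_time(n_chains, selection_probs, start=0):
    """Run n_chains identical copies; at each instant exactly one chain,
    drawn from selection_probs, makes a transition. Return the first time
    at which any chain hits the absorbing state."""
    states = np.full(n_chains, start)
    t = 0
    while True:
        t += 1
        k = rng.choice(n_chains, p=selection_probs)          # the competing step
        states[k] = rng.choice(P.shape[0], p=P[states[k]])
        if states[k] == ABSORBING:
            return t

n_chains = 4
uniform = np.full(n_chains, 1.0 / n_chains)
samples = [first_absorption_time(n_chains, uniform) for _ in range(2000)]
print(np.mean(samples))   # Monte Carlo estimate of the expected first absorption time
```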

8.
In this paper we suggest a new successive approximation method to compute the optimal discounted reward for finite-state, finite-action, discrete-time, discounted Markov decision chains. The method is based on a block partitioning of the (stochastic) matrices corresponding to the stationary policies. It is particularly attractive when the transition matrices are jointly nearly decomposable or nearly completely decomposable.
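The block-partitioned scheme itself is not described in the abstract; for context, this is the plain successive approximation (value iteration) that such methods accelerate, shown on a made-up two-state, two-action problem:

```python
import numpy as np

def value_iteration(P, r, beta, tol=1e-10, max_iter=100_000):
    """Plain successive approximation for a finite discounted MDP.

    P[a] is the transition matrix under action a, r[a] the reward vector,
    beta the discount factor. Iterates v <- max_a (r[a] + beta * P[a] v).
    """
    n = P.shape[1]
    v = np.zeros(n)
    for _ in range(max_iter):
        q = np.array([r[a] + beta * P[a] @ v for a in range(P.shape[0])])
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v_new, q.argmax(axis=0)

# two actions, two states, illustrative data
P = np.array([[[0.9, 0.1],
               [0.4, 0.6]],
              [[0.2, 0.8],
               [0.7, 0.3]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
v_star, policy = value_iteration(P, r, beta=0.95)
print(v_star, policy)
```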

9.
Decision-making in an environment of uncertainty and imprecision is a complex task for real-world problems. This paper introduces general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesàro average and the α-potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be robust with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.
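A sketch of the finite-convergence statement, assuming the usual max-min composition for fuzzy transition matrices; the matrix values and function names are illustrative, not from the paper. Powers of the fuzzy transition matrix are iterated until they repeat, which covers both the stationary and the periodic case:

```python
import numpy as np

def maxmin(A, B):
    """Max-min composition of two fuzzy relations (matrices with entries in [0, 1])."""
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def fuzzy_powers_until_repeat(P, max_steps=100):
    """Iterate P, P^2, P^3, ... under max-min composition.

    A finite fuzzy Markov chain reaches a stationary (possibly periodic)
    regime after finitely many steps: the sequence of powers eventually
    repeats. Returns (m, k) meaning P^m equals the earlier power P^k.
    """
    powers = [P]                              # powers[i] holds P^(i+1)
    for _ in range(max_steps):
        nxt = maxmin(powers[-1], P)
        for i, old in enumerate(powers):
            if np.array_equal(nxt, old):
                return len(powers) + 1, i + 1
        powers.append(nxt)
    raise RuntimeError("no repetition within max_steps")

# illustrative fuzzy transition matrix (entries are membership degrees, not probabilities)
P = np.array([[1.0, 0.4, 0.2],
              [0.5, 0.9, 0.3],
              [0.1, 0.6, 0.8]])
m, k = fuzzy_powers_until_repeat(P)
print(m, k)   # P^m == P^k; period m - k (a period of 1 means the powers have converged)
```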

10.
In many applications of absorbing Markov chains, solving the problem at hand involves finding the mean time to absorption. Moreover, in almost all real-world applications of Markov chains, accurate estimation of the elements of the probability matrix is a major concern. This paper develops a technique that provides close estimates of the mean number of stages before absorption using only the row sums of the transition matrix restricted to the transient states.
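The estimator itself is not given in the abstract; the sketch below only shows the exact quantity being approximated, t = (I − Q)⁻¹·1, next to the row sums of Q from which the paper's estimate is built (matrix values illustrative):

```python
import numpy as np

# transient-to-transient block Q of an absorbing chain (illustrative values)
Q = np.array([[0.2, 0.5, 0.1],
              [0.3, 0.3, 0.2],
              [0.1, 0.4, 0.2]])

# exact mean number of stages before absorption, from the fundamental matrix
t_exact = np.linalg.solve(np.eye(3) - Q, np.ones(3))

# the only inputs the row-sum estimate needs: each transient state's
# one-step probability of remaining among the transient states
row_sums = Q.sum(axis=1)

print(t_exact)
print(row_sums)
```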
