Similar literature
20 similar documents found.
1.
Markov chains are often used as mathematical models of natural phenomena, with transition probabilities defined in terms of parameters that are of interest in the scientific question at hand. Sensitivity analysis is an important way to quantify the effects of changes in these parameters on the behavior of the chain. Many properties of Markov chains can be written as simple matrix expressions, and hence matrix calculus is a powerful approach to sensitivity analysis. Using matrix calculus, we derive the sensitivity and elasticity of a variety of properties of absorbing and ergodic finite-state chains. For absorbing chains, we present the sensitivities of the moments of the number of visits to each transient state, the moments of the time to absorption, the mean number of states visited before absorption, the quasistationary distribution, and the probabilities of absorption in each of several absorbing states. For ergodic chains, we present the sensitivities of the stationary distribution, the mean first passage time matrix, the fundamental matrix, and the Kemeny constant. We include two examples applying the results to demographic and ecological problems.
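One identity from this toolbox is easy to check numerically: for the fundamental matrix N = (I - Q)^(-1) of an absorbing chain, matrix calculus gives dN = N (dQ) N. A minimal sketch, using a toy parameterized transient block Q of my own choosing (not an example from the paper), verified against a finite difference:

```python
import numpy as np

# Hedged sketch: verify the matrix-calculus identity dN = N (dQ) N for
# N = (I - Q)^{-1}, using an assumed 2x2 transient block parameterized by theta.
def Q_of(theta):
    return np.array([[0.5, theta],
                     [0.2, 0.6]])

def N_of(theta):
    return np.linalg.inv(np.eye(2) - Q_of(theta))

theta = 0.3
dQ = np.array([[0.0, 1.0],    # elementwise derivative dQ/dtheta
               [0.0, 0.0]])
N = N_of(theta)
analytic = N @ dQ @ N         # sensitivity from matrix calculus
numeric = (N_of(theta + 1e-6) - N_of(theta - 1e-6)) / 2e-6
print(np.allclose(analytic, numeric, atol=1e-4))   # True
```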

3.
An absorbing Markov chain is an important statistical model, widely used in algorithmic modeling across many disciplines, such as digital image processing and network analysis. To obtain the stationary distribution of such a model, the inverse of the transition matrix usually needs to be calculated, which remains difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution for models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it can handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains; similar techniques can be used to avoid calculating the inverse matrix in the general method.
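As a concrete illustration of the eigenvector idea, consider the gambler's ruin chain on states 0..N with absorbing states 0 and N (the instance below, with assumed N and p, is mine, not the paper's): the absorption probabilities form a right eigenvector of P with eigenvalue 1, pinned down by its boundary values, so no inverse of the transient block is needed.

```python
import numpy as np

# Gambler's ruin on {0,...,N}: win one unit w.p. p, lose one w.p. 1-p;
# states 0 and N are absorbing. Assumed toy parameters.
N, p = 10, 0.47
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i + 1] = p
    P[i, i - 1] = 1 - p

# h = P h (an eigenvector with eigenvalue 1) with h[0] = 0, h[N] = 1
# gives h[i] = Pr(reach N before 0 | start at i) -- no (I - Q)^{-1} needed.
A = P - np.eye(N + 1)
A[0], A[N] = 0.0, 0.0          # pin the two absorbing rows
A[0, 0] = A[N, N] = 1.0
b = np.zeros(N + 1)
b[N] = 1.0
h = np.linalg.solve(A, b)

# classical closed form (valid for p != 1/2) for comparison
r = (1 - p) / p
print(np.allclose(h, (r ** np.arange(N + 1) - 1) / (r ** N - 1)))   # True
```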

4.
5.
In this paper we model the run-time behavior of GAs with higher-cardinality representations as Markov chains, define the states of the Markov chain, and derive the transition probabilities of the corresponding transition matrix. We analyze the behavior of this chain and obtain bounds on its convergence rate and on the runtime complexity of the GA. We further investigate the effects of using binary versus higher-cardinality representations of a search space.
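The standard quantity behind such convergence-rate bounds is the second-largest eigenvalue modulus (SLEM) of the transition matrix. A toy sketch (an arbitrary three-state chain of mine, not the GA chain constructed in the paper):

```python
import numpy as np

# Assumed toy ergodic chain; convergence to stationarity is geometric,
# with rate governed by the second-largest eigenvalue modulus (SLEM).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

slem = sorted(abs(np.linalg.eigvals(P)), reverse=True)[1]
print("SLEM:", slem)           # distance to stationarity shrinks like slem**t

dist = np.array([1.0, 0.0, 0.0])
for t in range(1, 6):
    dist = dist @ P            # one step of the chain
    print(t, np.round(dist, 4))
```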

6.
Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration-specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state, the less likely they are to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite-state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing; in the latter it is regular.
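A minimal version of the non-absorbing case (b) can be set up directly; everything below (two locations, duration capped at D, a leaving probability that decays with duration) is an assumed toy parameterization of mine, not the paper's model.

```python
import numpy as np

D = 3                              # finite upper bound on duration
def leave_prob(d):                 # cumulative inertia: the longer the stay,
    return 0.4 / (d + 1)           # the less likely a move

states = [(loc, d) for loc in (0, 1) for d in range(D)]
idx = {s: i for i, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for (loc, d), i in idx.items():
    q = leave_prob(d)
    P[i, idx[(1 - loc, 0)]] = q                    # move: duration resets to 0
    P[i, idx[(loc, min(d + 1, D - 1))]] = 1 - q    # stay: duration grows, capped

# equilibrium distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print({s: round(float(x), 3) for s, x in zip(states, pi)})
```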

7.
We study the necessary and sufficient conditions for a finite ergodic Markov chain to converge to its stationary distribution in a finite number of transitions. Using this result, we describe the class of Markov chains which attain the stationary distribution in a finite number of steps, independent of the initial distribution. We then exhibit a queueing model that has a Markov chain embedded at the points of regeneration that falls within this class. Finally, we examine the class of continuous-time Markov processes whose embedded Markov chain possesses this property of rapid convergence, and find that, when the distribution of sojourn times is independent of the state, the distribution of the system at time t can be computed as a simple closed-form expression.
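A degenerate member of this class makes the phenomenon concrete (this one-step example is mine, not the paper's queueing model): if every row of P already equals the stationary vector, any initial distribution reaches stationarity in a single transition.

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])
P = np.tile(pi, (3, 1))        # every row of P equals pi
for dist in np.eye(3):         # three point-mass initial distributions
    print(dist @ P)            # each prints pi after one step
```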

8.
We consider convergence of Markov chains with uncertain parameters, known as imprecise Markov chains, which contain an absorbing state. We prove that, conditional on non-absorption, the imprecise conditional probabilities converge independently of the initial imprecise probability distribution, provided some regularity conditions hold. This is a generalisation of a known result from the classical theory of Markov chains by Darroch and Seneta [6].
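In the precise (classical) special case, the Darroch-Seneta limit can be observed directly: conditioned on non-absorption, the law over transient states converges to the quasi-stationary distribution, the left Perron eigenvector of the transient block Q. A numerical sketch with an assumed toy Q:

```python
import numpy as np

Q = np.array([[0.6, 0.3],      # assumed transient block (row sums < 1:
              [0.4, 0.5]])     # the missing mass is absorption)
dist = np.array([1.0, 0.0])
for _ in range(50):
    dist = dist @ Q
    dist /= dist.sum()         # condition on not being absorbed yet

w, v = np.linalg.eig(Q.T)      # quasi-stationary law: left Perron eigenvector
qsd = np.real(v[:, np.argmax(np.real(w))])
qsd /= qsd.sum()
print(np.allclose(dist, qsd))  # True
```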

9.
In this paper we extend a result which holds for the class of networks of quasireversible nodes to a class of networks constructed by coupling Markov chains. We begin with a network in which the transition rates governing the stochastic behaviour of the individual nodes depend only on the state of the node. Assuming that the network has an invariant measure, we construct another network with transition rates at each node depending on the state of the entire network, and obtain its invariant measure.

10.
Stochastic Analysis and Applications, 2013, 31(5): 1175-1207

We consider a particle evolving according to a Markov motion in an absorbing medium. We analyze the long-term behavior of the time at which the particle is killed and the distribution of the particle conditional upon survival. Under given regularity conditions, these quantities are characterized by the limiting distribution and the Lyapunov exponent of a nonlinear Feynman-Kac operator. We propose to approximate this distribution and this exponent numerically, based on various interacting particle system interpretations of the Feynman-Kac operator. We study the properties of the resulting estimates.
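A bare-bones interacting-particle interpretation can be sketched for a toy absorbing walk (the dynamics below are an assumption of mine, not the paper's general setting): killed particles are resampled from the survivors, the per-step survival fraction estimates the Lyapunov exponent, and the surviving cloud approximates the law conditional on survival.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy absorbing medium: simple random walk on {0,...,L}, killed at 0,
# with (sticky) reflection at L. Assumed dynamics for illustration only.
L, n_particles, n_steps = 20, 5000, 2000
x = rng.integers(1, L + 1, size=n_particles)
log_survival = 0.0

for _ in range(n_steps):
    x = np.minimum(x + rng.choice([-1, 1], size=n_particles), L)
    dead = x <= 0
    alive = ~dead
    log_survival += np.log(alive.mean())              # running Lyapunov estimate
    x[dead] = rng.choice(x[alive], size=dead.sum())   # resample killed particles

print("estimated Lyapunov exponent:", log_survival / n_steps)
# a histogram of x now approximates the distribution conditional on survival
```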

11.
In many applications of absorbing Markov chains, solving the problem at hand involves finding the mean time to absorption. Moreover, in almost all real-world applications of Markov chains, accurate estimation of the elements of the probability matrix is a major concern. This paper develops a technique that provides close estimates of the mean number of stages before absorption using only the row sums of the transition matrix restricted to the transient states.
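For reference, the exact quantity such estimates target is standard absorbing-chain algebra: the vector of mean absorption times is t = (I - Q)^(-1) 1, where Q is the transient block. A sketch with an assumed toy Q (the paper's row-sum estimator itself is not reproduced here):

```python
import numpy as np

Q = np.array([[0.5, 0.3],              # assumed transient-to-transient block
              [0.2, 0.6]])
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t)                               # mean steps to absorption: [5. 5.]
```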

12.
We study a discrete-time single-server queue where batches of messages arrive. Each message consists of a geometrically distributed number of packets which do not arrive at the same instant and which each require one time unit of service. We consider the cases of constant spacing and geometrically distributed (random) spacing between consecutive packets of a message. For the probability generating function of the stationary distribution of the embedded Markov chain we derive, in both cases, a functional equation involving a boundary function. The stationary mean number of packets in the system can be computed via this boundary function without solving the functional equation. In the case of constant (random) spacing, the boundary function can be determined by numerically solving a finite-dimensional (an infinite-dimensional) system of linear equations. Numerical results are presented for Poisson- and Bernoulli-distributed message arrivals. Further, limiting results are derived.

13.
We consider discrete-time single-server queues fed by independent, heterogeneous sources with geometrically distributed idle periods. While active, each source generates cells depending on the state of an underlying Markov chain. We first derive a general and explicit formula for the mean buffer contents in steady state when the underlying Markov chain of each source has finitely many states. Next we show that the general formula applies to queues fed by independent sources with infinite-state underlying Markov chains and discrete phase-type active periods. We then provide explicit formulas for the mean buffer contents in queues with Markovian autoregressive sources and greedy sources. Further, we study two limiting cases in general settings: one in which the lengths of the active periods of each source are governed by an infinite-state absorbing Markov chain, and one obtained in the limit as the number of sources goes to infinity under an appropriate normalizing condition. The latter limit leads to a queue with (generalized) M/G/∞ input sources. We provide sufficient conditions under which the general formula is applicable to these limiting cases. AMS subject classification: 60K25, 60K37, 60J10.

14.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach to solve this problem is to construct a Markov chain which converges to the desired probability distribution, and run this Markov chain to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require knowledge of the stationary distribution only up to a normalizing constant. Each of these methods produces a reversible Markov chain which has the same stationary distribution as the original chain, and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result which shows that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
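The Metropolis-Hastings construction mentioned at the end needs the target only up to a normalizing constant. A minimal discrete sketch (the target and proposal are illustrative choices of mine, not the paper's Peskun-improved chains):

```python
import numpy as np

rng = np.random.default_rng(1)

weights = np.arange(1, 11, dtype=float)   # unnormalized target: pi(i) proportional to i+1
n, state, counts = 200_000, 0, np.zeros(10)

for _ in range(n):
    proposal = (state + rng.choice([-1, 1])) % 10       # symmetric ring proposal
    # accept w.p. min(1, pi(proposal)/pi(state)); the normalizer cancels
    if rng.random() < weights[proposal] / weights[state]:
        state = proposal
    counts[state] += 1

print(np.round(counts / n, 3))            # close to weights / weights.sum()
```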

15.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with a unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.

16.
Suppose we observe a stationary Markov chain with unknown transition distribution. The empirical estimator for the expectation of a function of two successive observations is known to be efficient. For reversible Markov chains, an appropriate symmetrization is efficient. For functions of more than two arguments, these estimators cease to be efficient. We determine the influence function of efficient estimators of expectations of functions of several observations, both for completely unknown and for reversible Markov chains. We construct simple efficient estimators in both cases.

17.
We have previously used Markov models to describe movements of patients between hospital states; these states may be actual or virtual, and are described by a phase-type distribution. Here we extend this approach to a Markov reward model for a healthcare system with Poisson admissions and an absorbing state, typically death. The distribution of costs is evaluated at any time, and expressions are derived for the mean and variance of costs. The average cost at any time is then determined for two scenarios, the Therapeutic and Prosthetic models respectively. This example is used to illustrate the idea that keeping acute patients in hospital longer, to ensure fitness for discharge, may reduce costs by decreasing the number of patients that become long-stay. In addition, we develop a Markov reward model for a healthcare system including states where the patient is in hospital and states where the patient is in the community. In each case, the length of stay is described by a phase-type distribution, thus enabling the representation of durations and costs in each phase within a Markov framework. The model can be used to determine costs for the entire system, facilitating a systems approach to the planning of healthcare and a holistic approach to costing. Such models help us to assess the complex relationship between hospital and community care.
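The cost calculations rest on standard absorbing-chain algebra: with a per-period cost vector r over the transient states, the expected total cost to absorption is (I - Q)^(-1) r. A toy two-state (acute, long-stay) sketch with assumed numbers, not the paper's phase-type model:

```python
import numpy as np

Q = np.array([[0.7, 0.2],       # assumed: acute patients may become long-stay
              [0.0, 0.9]])      # long-stay patients leave only by absorption
r = np.array([300.0, 100.0])    # assumed daily cost in each state
cost = np.linalg.solve(np.eye(2) - Q, r)
print(cost)                     # expected total cost from each starting state
```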

18.
This paper is concerned with the circumstances under which a discrete-time absorbing Markov chain has a quasi-stationary distribution. We showed in a previous paper that a pure birth-death process with an absorbing bottom state has a quasi-stationary distribution (in fact an infinite family of quasi-stationary distributions) if and only if absorption is certain and the chain is geometrically transient. If we widen the setting by allowing absorption in one step (killing) from any state, the two conditions are still necessary, but no longer sufficient. We show that the birth-death type of behaviour prevails as long as the number of states in which killing can occur is finite. But if there are infinitely many such states, and if the chain is geometrically transient and absorption is certain, then there may be 0, 1, or infinitely many quasi-stationary distributions. Examples of each type of behaviour are presented. We also survey and supplement the theory of quasi-stationary distributions for discrete-time Markov chains in general.

19.
The finite Markov chain imbedding technique has been successfully applied in various fields for finding the exact or approximate distributions of runs and patterns under independent and identically distributed or Markov-dependent trials. In this paper, we derive a new recursive equation for the distribution of the scan statistic using the finite Markov chain imbedding technique. We also address the problem of obtaining the transition probabilities of the imbedded Markov chain by introducing a notion termed Double Finite Markov Chain Imbedding, in which the transition probabilities are themselves obtained by the finite Markov chain imbedding technique. Applications to a random permutation model in chemistry and to the coupon collector's problem are given to illustrate the idea.
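The flavor of the technique is easiest to see on the simplest pattern, a success run (this small example is mine; the paper's scan statistic and double-imbedding construction are more elaborate): imbed the trial sequence into a chain whose states track the current run length, with an absorbing state once the run is complete.

```python
import numpy as np

def run_probability(n, k, p):
    # P(a run of k successes occurs within n i.i.d. Bernoulli(p) trials),
    # via finite Markov chain imbedding: states 0..k-1 are the current run
    # length, state k is absorbing.
    P = np.zeros((k + 1, k + 1))
    for s in range(k):
        P[s, s + 1] = p        # success extends the run
        P[s, 0] = 1 - p        # failure resets it
    P[k, k] = 1.0              # run achieved: absorb
    dist = np.zeros(k + 1)
    dist[0] = 1.0
    for _ in range(n):
        dist = dist @ P
    return dist[k]

print(run_probability(n=10, k=3, p=0.5))   # 0.5078125
```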

20.
We study the limit behaviour of a nonlinear differential equation whose solution is a superadditive generalisation of a stochastic matrix, prove convergence, and provide necessary and sufficient conditions for ergodicity. In the linear case, the solution of our differential equation is equal to the matrix exponential of an intensity matrix and can then be interpreted as the transition operator of a homogeneous continuous-time Markov chain. Similarly, in the generalised nonlinear case that we consider, the solution can be interpreted as the lower transition operator of a specific set of non-homogeneous continuous-time Markov chains, called an imprecise continuous-time Markov chain. In this context, our convergence result shows that for a fixed initial state, an imprecise continuous-time Markov chain always converges to a limiting distribution, and our ergodicity result provides a necessary and sufficient condition for this limiting distribution to be independent of the initial state.
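In the linear special case mentioned above, everything is explicit: the solution is exp(Qt) for an intensity matrix Q, and ergodicity appears as the rows of exp(Qt) agreeing in the limit. A small sketch with an assumed Q:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],    # assumed intensity matrix:
              [ 1.0, -1.0,  0.0],    # nonnegative off-diagonal entries,
              [ 2.0,  2.0, -4.0]])   # rows summing to zero

for t in (0.1, 1.0, 10.0):
    print(t, np.round(expm(Q * t), 4))   # rows coincide for large t
```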
