Similar Literature (20 results)
1.
This paper is concerned with the circumstances under which a discrete-time absorbing Markov chain has a quasi-stationary distribution. We showed in a previous paper that a pure birth-death process with an absorbing bottom state has a quasi-stationary distribution (actually an infinite family of quasi-stationary distributions) if and only if absorption is certain and the chain is geometrically transient. If we widen the setting by allowing absorption in one step (killing) from any state, the two conditions are still necessary, but no longer sufficient. We show that the birth-death type of behaviour prevails as long as the number of states in which killing can occur is finite. But if there are infinitely many such states, and if the chain is geometrically transient and absorption certain, then there may be 0, 1, or infinitely many quasi-stationary distributions. Examples of each type of behaviour are presented. We also survey and supplement the theory of quasi-stationary distributions for discrete-time Markov chains in general.
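For concreteness, here is the standard formulation behind these statements (our notation, not copied from the paper): write T for the absorption time and P_C for the substochastic transition matrix restricted to the set C of non-absorbing states; a probability distribution ν on C is quasi-stationary when conditioning on survival leaves it invariant.

```latex
\mathbb{P}_\nu(X_n = j \mid T > n) = \nu_j
  \quad \text{for all } n \ge 0,\ j \in C,
\qquad\Longleftrightarrow\qquad
\nu P_C = \gamma\,\nu \ \text{ for some } \gamma \in (0,1).
```

Geometric transience means that the survival probabilities P_i(T > n) decay at a geometric rate.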

2.
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M (≥ 2) states.
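To make the property concrete, here is a minimal self-contained sketch (our illustration; the transition matrix is invented): in a chain with M states, a state j that is accessible from i at all is reachable from i in at most M − 1 steps, which can be verified with powers of the chain's adjacency pattern.

```python
import numpy as np

def reachable_within(P, i, j, m):
    """True if state j is reachable from state i in at most m steps."""
    A = (P > 0).astype(int)          # one-step adjacency pattern of the chain
    R = np.eye(len(P), dtype=int)    # step 0: every state reaches itself
    for _ in range(m):
        R = np.minimum(R + R @ A, 1) # add paths that are one step longer
    return R[i, j] > 0

# Classic property: with M states, j is accessible from i
# iff it is reachable in at most M - 1 steps.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
M = len(P)
assert reachable_within(P, 0, 2, M - 1)   # path 0 -> 1 -> 2 in 2 steps
```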

3.
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm. In this paper, an alternative to the DA algorithm is proposed. It is shown that the modified Markov chain is always more efficient than DA, in the sense that the asymptotic variance in the central limit theorem under the alternative chain is no larger than that under DA. The modification is based on Peskun’s (Biometrika 60:607–612, 1973) result, which shows that the asymptotic variance of time-average estimators based on a finite state space reversible Markov chain does not increase if the Markov chain is altered by increasing all off-diagonal probabilities. In the special case when the state space or the augmentation space of the DA chain is finite, it is shown that Liu’s (Biometrika 83:681–682, 1996) modified sampler can be used to improve upon the DA algorithm. Two illustrative examples, namely the beta-binomial distribution and a model for analyzing rank data, are used to show the gains in efficiency from the proposed algorithms.
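As a point of reference for the beta-binomial example, here is a minimal sketch of a plain DA chain (our own illustration with invented parameter values, not the paper's code or its improved sampler): the sampler alternates the augmentation draw θ | x with the imputation draw x | θ, and its stationary law on x is beta-binomial(n, a, b).

```python
import numpy as np

rng = np.random.default_rng(0)

def da_beta_binomial(n, a, b, iters, x0=0):
    """Plain DA sampler whose stationary distribution on x is
    beta-binomial(n, a, b). Each sweep: draw theta | x, then x | theta."""
    x = x0
    out = np.empty(iters, dtype=int)
    for t in range(iters):
        theta = rng.beta(a + x, b + n - x)   # augmentation step
        x = rng.binomial(n, theta)           # imputation step
        out[t] = x
    return out

samples = da_beta_binomial(n=10, a=2.0, b=3.0, iters=5000)
print(samples.mean())   # should be close to n * a / (a + b) = 4.0
```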

4.
We consider a single-buffer fluid system in which the instantaneous rate of change of the fluid is determined by the current state of a background stochastic process called the “environment”. When the fluid level hits zero, it instantaneously jumps to a predetermined positive level q. At the jump epoch the environment state can undergo an instantaneous transition. Between two consecutive jumps of the fluid level, the environment process behaves like a continuous-time Markov chain (CTMC) with finite state space. We develop methods to compute the limiting distribution of the bivariate process (buffer level, environment state). We also study a special case where the environment state does not change when the fluid level jumps. In this case we present a stochastic decomposition property: in steady state the buffer content is the sum of two independent random variables, one uniform over [0, q], and the other the steady-state buffer content in a standard fluid model without jumps.
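In symbols, the decomposition in the special case reads (our notation):

```latex
B \;\stackrel{d}{=}\; U + B_0,
\qquad U \sim \operatorname{Uniform}[0, q],
\quad U \text{ independent of } B_0,
```

where B is the steady-state buffer content of the model with jumps and B_0 that of the standard fluid model without jumps.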

5.
In this paper, we show that the discrete GI/G/1 system with Bernoulli retrials can be analyzed as a level-dependent QBD process with infinite blocks; these blocks are finite when both the inter-arrival and service times have finite supports. The resulting QBD has a special structure which makes it convenient to analyze by the matrix-analytic method (MAM). By representing both the inter-arrival and service times using a Markov chain based approach, we are able to use the tools for phase-type distributions in our model. Secondly, the resulting phase-type distributions have additional structure which we exploit in developing the algorithmic approach. The final working model approximates the level-dependent Markov chain with a level-independent Markov chain that has a large set of boundaries, which allows us to use the modified matrix-geometric method to analyze the problem. A key task is selecting the level at which this level independence should begin; a procedure for this selection is presented, and then the distribution of the number of jobs in the orbit is obtained. Numerical examples are presented to demonstrate how this method works.
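For orientation, here is a minimal sketch of the matrix-geometric ingredient for a level-independent discrete-time QBD (our illustration; the block matrices below are toy placeholders, not the retrial model's blocks). With one-level-up, same-level, and one-level-down blocks A0, A1, A2, the rate matrix R is the minimal nonnegative solution of R = A0 + R·A1 + R²·A2, and the stationary level vectors satisfy π_{k+1} = π_k R past the boundary.

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=100000):
    """Minimal nonnegative solution of R = A0 + R A1 + R^2 A2
    by functional iteration, starting from the zero matrix."""
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_new = A0 + R @ A1 + R @ R @ A2
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("R iteration did not converge")

# Toy blocks: rows of A0 + A1 + A2 sum to 1, with downward drift
# (total down-weight 0.4 > up-weight 0.3) so the QBD is stable.
A0 = np.array([[0.2, 0.1], [0.1, 0.2]])   # up one level
A1 = np.array([[0.2, 0.1], [0.1, 0.2]])   # same level
A2 = np.array([[0.3, 0.1], [0.1, 0.3]])   # down one level
R = solve_R(A0, A1, A2)
# Stationary level vectors then satisfy pi_{k+1} = pi_k R.
```

The natural iteration above is the simplest convergent scheme; production codes usually use faster alternatives such as logarithmic reduction.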

6.
We consider a Markov chain in continuous time with one absorbing state and a finite set S of transient states. When S is irreducible, the limiting distribution of the chain as t → ∞, conditional on survival up to time t, is known to equal the (unique) quasi-stationary distribution of the chain. We address the problem of generalizing this result to a setting in which S may be reducible, and show that it remains valid if the eigenvalue with maximal real part of the generator of the (sub)Markov chain on S has geometric (but not, necessarily, algebraic) multiplicity one. The result is then applied to pure death processes and, more generally, to quasi-death processes. We also show that the result holds true even when the geometric multiplicity is larger than one, provided the irreducible subsets of S satisfy an accessibility constraint. A key role in the analysis is played by some classic results on M-matrices.
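In symbols, the result concerns the limit (our notation):

```latex
\lim_{t \to \infty} \mathbb{P}\bigl(X_t = j \mid T > t\bigr) = u_j,
  \qquad j \in S,
\qquad \text{where } u\,Q_S = -\alpha\, u,
```

with T the absorption time, Q_S the generator restricted to S, −α its eigenvalue of maximal real part, and u normalized to sum to one.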

7.
In this paper, an absorbing finite Markov chain model is developed to facilitate planning in a DBA Program in a Graduate School in Australia. The short- and long-term forecasting facilities offered by the model are used to determine the expected numbers in the DBA Program. The Markov model is also used to determine other critical attributes of the Program, such as the expected success rate for candidates and the expected first passage times prior to absorption in general, and prior to withdrawal and graduation specifically, all facilitating a status quo analysis. The absorbing-state-specific first passage times are arrived at using a relatively simple approach. This paper also illustrates how the model can be used to competitively benchmark these status quo attributes. The expected number of doctoral candidates in each ‘state’ of the Program (in both the short- and long-term contexts) can be used to estimate, among a range of important model attributes, the expected revenue stream and expected supervisor load in the DBA Program. From this information, changes can be made to the Program where required.
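The machinery behind such forecasts is the standard theory of absorbing chains. A generic sketch with invented transition probabilities (three hypothetical candidate states, two absorbing outcomes; not the paper's data) shows how the quantities named above are computed:

```python
import numpy as np

# Transient states (illustrative): 0 = Year1, 1 = Year2, 2 = Thesis.
# Absorbing states: graduate, withdraw. Q = transient-to-transient block,
# Rm = transient-to-absorbing block of the one-step transition matrix.
Q = np.array([[0.10, 0.75, 0.00],
              [0.00, 0.15, 0.70],
              [0.00, 0.00, 0.25]])
Rm = np.array([[0.00, 0.15],    # columns: [graduate, withdraw]
               [0.00, 0.15],
               [0.65, 0.10]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
t = N @ np.ones(3)                 # expected steps before absorption
B = N @ Rm                         # absorption probabilities per start state

print(t)        # expected time in the Program from each starting state
print(B[0])     # P(graduate), P(withdraw) for a new candidate
```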

8.
We study a unichain Markov decision process, i.e. a controlled Markov process whose state process under every stationary policy is an ergodic Markov chain. Here the state and action spaces are assumed to be either finite or countable. When the state process is uniformly ergodic and the immediate cost is bounded, a policy that minimizes the long-term expected average cost also has an nth-stage sample-path cost that, with probability one, is asymptotically less than the nth-stage sample-path cost under any other non-optimal stationary policy with a larger expected average cost. This is a strengthening, in the Markov model case, of the a.s. asymptotically optimal property frequently discussed in the literature.
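In symbols (our paraphrase, with x_k^π denoting the state process under stationary policy π and J^π its expected average cost): for the average-cost-optimal stationary policy π* and any stationary policy π with J^π > J^{π*},

```latex
\frac{1}{n}\sum_{k=0}^{n-1} c\bigl(x_k^{\pi^*}, \pi^*(x_k^{\pi^*})\bigr)
  \xrightarrow{\ \text{a.s.}\ } J^{\pi^*}
\;<\; J^{\pi}
\;\le\; \liminf_{n\to\infty}
  \frac{1}{n}\sum_{k=0}^{n-1} c\bigl(x_k^{\pi}, \pi(x_k^{\pi})\bigr)
  \quad \text{a.s.}
```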

9.
Consider a process in which different events occur, with random inter-occurrence times. In Markov renewal processes, as well as in semi-Markov processes, the sequence of events is a Markov chain and the waiting-time distributions depend only on the types of the last and the next event. Suppose that the state space is finite and that the process started far in the past, achieving stationarity. Weibull distributions are proposed for the waiting times, and their parameters are estimated jointly with the transition probabilities by maximum likelihood, when one or several realizations of the process are observed over finite windows. The model is illustrated with data on earthquakes of three severity levels that occurred in Turkey during the 20th century.

AMS 2000 Subject Classification: 60K20
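Concretely, with transition probabilities p_{ij} and Weibull waiting-time densities, the complete-data likelihood for one observed trajectory has the form (our notation, ignoring the censoring corrections needed at the edges of the observation window):

```latex
L = \prod_{k} p_{J_{k-1} J_k}\, f_{J_{k-1} J_k}(x_k),
\qquad
f_{ij}(x) = \frac{\beta_{ij}}{\eta_{ij}}
\left(\frac{x}{\eta_{ij}}\right)^{\beta_{ij}-1}
\exp\!\left[-\left(\frac{x}{\eta_{ij}}\right)^{\beta_{ij}}\right].
```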

10.
1. Introduction. In reliability theory, in order to calculate the failure frequency of a repairable system, Shi first introduced and studied the transition frequency between two disjoint state sets for a finite Markov chain and a vector Markov process with finite discrete state space, and obtained a general formula of transition frequency. Then, on the condition that the generator matrix of the Markov chain is uniformly bounded, Shi [8, 9] again proved the transition frequency formula and obtained three other useful formulas. Obviously, the point (or counting) process of sta…

11.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of communicating states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process has been captured by the class.

The reduction is made according to the magnitude of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then justifies the assumption underlying the reduction: we reduce the ergodic class to its stronger part, which contains the most important events, these also having the slower evolution. The reduced system keeps the stochastic property, so it is again a Markov chain.
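A minimal sketch of the ordering step only (our illustration; the full method must also re-normalize the dynamics on the retained strong part, which is not shown):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an irreducible chain: left eigenvector
    of P for eigenvalue 1, normalized to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

def strong_part(P, mass=0.99):
    """Indices of the most probable states, in decreasing steady-state
    order, keeping just enough states to cover `mass` of the probability."""
    pi = stationary_distribution(P)
    order = np.argsort(pi)[::-1]
    csum = np.cumsum(pi[order])
    n_keep = int(np.searchsorted(csum, mass)) + 1
    return order[:n_keep]

# Example: a two-time-scale toy chain with a rarely visited third state.
P = np.array([[0.89, 0.10, 0.01],
              [0.10, 0.89, 0.01],
              [0.50, 0.50, 0.00]])
print(strong_part(P, mass=0.95))   # the two heavy states form the strong part
```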

12.
Stochastic networks with time-varying arrival and service rates and routing structure are studied. The time variations are governed by, in addition to the state of the system, two independent finite-state Markov processes X and Y. The transition times of X are significantly smaller than typical inter-arrival and processing times, whereas the reverse is true for the Markov process Y. By introducing a suitable scaling parameter, one can model such a system using a hierarchy of time scales. Diffusion approximations for such multiscale systems are established under a suitable heavy traffic condition. In particular, it is shown that, under certain conditions, properly normalized buffer content processes converge weakly to a reflected diffusion. The drift and diffusion coefficients of this limit model are functions of the state process, the invariant distribution of X, and a finite-state Markov process which is independent of the driving Brownian motion.

13.
Qi-Ming He, Queueing Systems, 2005, 49(3-4): 363-403
In this paper, we study a discrete-time queueing system with multiple types of customers and a first-come-first-served (FCFS) service discipline. Customers arrive according to a semi-Markov arrival process and the service times of individual customers have PH-distributions. A GI/M/1-type Markov chain for a generalized age process of batches of customers is introduced. The steady-state distribution of this GI/M/1-type Markov chain is found explicitly and, consequently, the steady-state distributions of the age of the batch in service, the total workload in the system, waiting times, and sojourn times of different batches and different types of customers are obtained. We show that the generalized age process and a generalized total workload process have the same steady-state distribution. We prove that the waiting times and sojourn times have PH-distributions and find matrix representations of those PH-distributions. When the arrival process is a Markov arrival process with marked transitions, we construct a QBD process for the age process and the total workload process. The steady-state distributions of the waiting times and the sojourn times, both at the batch level and the customer level, are obtained from the steady-state distribution of the QBD process. A number of numerical examples are presented to gain insight into the waiting processes of different types of customers.

AMS subject classification: 60K25, 60J10

14.
The ergodic theory of Markov chains in random environments
Summary. A general formulation of the stochastic model for a Markov chain in a random environment is given, including an analysis of the dependence relations between the environmental process and the controlled Markov chain, in particular the problem of feedback. Assuming stationary environments, the ergodic theory of Markov processes is applied to give conditions for the existence of finite invariant measures (equilibrium distributions) and to obtain ergodic theorems, which provide results on the convergence of products of random stochastic matrices. Coupling theory is used to obtain results on direct convergence of these products and the structure of the tail σ-field. State properties, including classification and communication properties, are discussed.

15.
In this paper, circuit chains of higher order are defined as multiple Markov chains whose transition probabilities are expressed in terms of the weights of a finite class of circuits in a finite set, in connection with kinetic properties along the circuits. Conversely, it is proved that if we join any finite doubly infinite strictly stationary Markov chain of order r whose transitions hold cyclically with a second chain having the same transitions for the reversed time-sense, then the pair may be represented as circuit chains of order r.

16.
This paper extends the model and analysis of Vandaele and Vanmaele [Insurance: Mathematics and Economics, 2008, 42: 1128–1137]. We assume that the parameters of the Lévy process modelling the dynamics of the risky asset in the financial market depend on a finite-state Markov chain, whose state can be interpreted as the state of the economy. Under this regime-switching Lévy model, we obtain locally risk-minimizing hedging strategies for some unit-linked life insurance products, including both the pure endowment policy and the term insurance contract.

17.
We study a Markov chain on generating n-tuples of a fixed group, which arises in algorithms for manipulating finite groups. The main tools are a comparison of two Markov chains on different but related state spaces and the combinatorics of random paths. The results involve group-theoretical parameters such as the size of minimal generating sets, the number of distinct generating k-tuples for different k's, and the maximal diameter of the group.

18.
We study a PH/G/1 queue in which the arrival process and the service times depend on the state of an underlying Markov chain J(t) on a countable state space E. We derive the busy period process, waiting time and idle time of this queueing system. We also study the Markov-modulated E_K/G/1 queueing system as a special case.

19.
We consider discrete-time single-server queues fed by independent, heterogeneous sources with geometrically distributed idle periods. While active, each source generates cells depending on the state of an underlying Markov chain. We first derive a general and explicit formula for the mean buffer contents in steady state when the underlying Markov chain of each source has finitely many states. Next we show the applicability of the general formula to queues fed by independent sources with infinite-state underlying Markov chains and discrete phase-type active periods. We then provide explicit formulas for the mean buffer contents in queues with Markovian autoregressive sources and greedy sources. Further, we study two limiting cases in general settings: in one, the lengths of the active periods of each source are governed by an infinite-state absorbing Markov chain; the other is the model obtained in the limit as the number of sources goes to infinity under an appropriate normalizing condition. The latter limit leads to a queue with (generalized) M/G/∞ input sources. We provide sufficient conditions under which the general formula is applicable to these limiting cases.

AMS subject classification: 60K25, 60K37, 60J10

20.
We consider the reflection of an additive process with negative drift controlled by a Markov chain on a finite state space. We determine the tail behaviour of the distribution of the maximum over a regenerative cycle in the case of subexponential increments. Based on this, the asymptotic distribution of the running maximum is derived. Applications of the results to Markov-modulated single-server queueing systems are given.
