Similar Documents
20 similar documents found (search time: 15 ms)
1.
Recursive equations are derived for the conditional distribution of the state of a Markov chain, given observations of a function of the state. Mainly continuous-time chains are considered. The equations for the conditional distribution are given both in matrix form and as differential equations. The conditional distribution itself forms a Markov process. Special cases considered are doubly stochastic Poisson processes with a Markovian intensity, Markov chains with a random time, and Markovian approximations of semi-Markov processes. Further, the results are used to compute the Radon-Nikodym derivative for two probability measures for a Markov chain when a function of the state is observed.

2.
We propose a method to abstract a given stochastic Petri net (SPN). We show that the reachability tree of the given SPN is isomorphic to a Markov renewal process (MRP). The given SPN is then transformed to a state transition system (STS), and the STS is reduced; the reduction of states in the STS corresponds to a fusion of series transitions in the SPN. The reduced STS is transformed back into an abstract SPN. We show that the notion of the conditional first-passage time from a given state to the others in the STS is helpful for eliminating nonessential states, and thus places and transitions in the given SPN. The mass functions, that is, the distribution functions of the conditional first-passage times between preserved states in the reduced MRP, preserve the firing probabilities of the fused transitions, and the firing probability of a preserved transition likewise preserves the stochastic properties of the fused transitions.

3.
The concept of a limiting conditional age distribution of a continuous-time Markov process whose state space is the set of non-negative integers and for which {0} is absorbing is defined as the weak limit, as t→∞, of the last time before t at which an associated "return" Markov process exited from {0}, conditional on the state, j, of this process at time t. It is shown that this limit exists and is non-defective if the return process is ρ-recurrent and satisfies the strong ratio limit property. As a preliminary to the proof of the main results, some general results are established on the representation of the ρ-invariant measure and function of a Markov process. The conditions of the main results are shown to be satisfied by the return process constructed from a Markov branching process and by birth and death processes. Finally, a number of limit theorems for the limiting age as j→∞ are given.

4.
A measure of the "mixing time" or "time to stationarity" in a finite irreducible discrete-time Markov chain is considered. The statistic η_i = Σ_j π_j m_ij, where {π_j} is the stationary distribution and m_ij is the mean first passage time from state i to state j of the Markov chain, is shown to be independent of the initial state i (so that η_i = η for all i); it is minimal in the case of a periodic chain, yet can be arbitrarily large in a variety of situations. An application considering the effect that perturbations of the transition probabilities have on the stationary distributions of Markov chains leads to a new bound, involving η, for the 1-norm of the difference between the stationary probability vectors of the original and the perturbed chain. When η is large, the stationary distribution of the Markov chain is very sensitive to perturbations of the transition probabilities.
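The independence of η_i from the initial state i can be checked numerically. A minimal sketch, assuming the statistic is η_i = Σ_j π_j m_ij with the convention m_ii = 0, and using the classical fundamental-matrix formula m_ij = (z_jj − z_ij)/π_j; the 3-state transition matrix below is a made-up example, not from the paper:

```python
import numpy as np

# A made-up irreducible, aperiodic transition matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Fundamental matrix Z = (I - P + 1*pi')^{-1} and mean first passage times
# m_ij = (z_jj - z_ij) / pi_j  (so m_ii = 0).
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.outer(np.ones(n), np.diag(Z)) - Z) / pi

# eta_i = sum_j pi_j m_ij -- the same value for every starting state i.
eta = M @ pi
```

Under these conventions the common value equals trace(Z) − 1, since the rows of Z sum to one.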

5.
We consider a Markov chain in continuous time with one absorbing state and a finite set S of transient states. When S is irreducible, the limiting distribution of the chain as t→∞, conditional on survival up to time t, is known to equal the (unique) quasi-stationary distribution of the chain. We address the problem of generalizing this result to a setting in which S may be reducible, and show that it remains valid if the eigenvalue with maximal real part of the generator of the (sub)Markov chain on S has geometric (but not, necessarily, algebraic) multiplicity one. The result is then applied to pure death processes and, more generally, to quasi-death processes. We also show that the result holds true even when the geometric multiplicity is larger than one, provided the irreducible subsets of S satisfy an accessibility constraint. A key role in the analysis is played by some classic results on M-matrices.
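In the irreducible case, the quasi-stationary distribution can be computed as the normalized left eigenvector of the sub-generator on S associated with the eigenvalue of maximal real part. A minimal numerical sketch, with a made-up 3×3 sub-generator (absorption occurs only from the first transient state):

```python
import numpy as np

# Sub-generator Q on the transient states S: off-diagonal entries are
# transition rates, row sums are <= 0, and the deficit in the first row
# is the rate of absorption into the absorbing state.
Q = np.array([[-3.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])

w, vl = np.linalg.eig(Q.T)    # left eigenvectors of Q via its transpose
k = np.argmax(w.real)         # eigenvalue with maximal real part
lam = w[k].real               # real by Perron-Frobenius (Q irreducible)
alpha = np.real(vl[:, k])
alpha /= alpha.sum()          # normalize to a probability vector

# alpha is the quasi-stationary distribution: alpha @ Q = lam * alpha.
```

Since Q is irreducible here, the Perron eigenvector has entries of one sign, so normalizing by the sum yields a genuine probability vector.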

6.
Abramov, V., Liptser, R.: Queueing Systems 46(3–4), 353–361 (2004)
In this paper, sufficient conditions are given for the existence of the limiting distribution of a nonhomogeneous countable Markov chain with a time-dependent transition intensity matrix. The method of proof exploits the fact that if the distribution of a random process Q=(Q_t)_{t≥0} is absolutely continuous with respect to the distribution of an ergodic random process Q°=(Q°_t)_{t≥0}, then Q_t converges in law, as t→∞, to π, the invariant measure of Q°. We apply this result to the asymptotic analysis, as t→∞, of a nonhomogeneous countable Markov chain which shares its limiting distribution with an ergodic birth-and-death process.

7.
We consider a discrete-time Markov chain on the non-negative integers with drift to infinity and study the limiting behavior of the state probabilities conditioned on not having left state 0 for the last time. Using a transformation, we obtain a dual Markov chain with an absorbing state such that absorption occurs with probability 1. We prove that the state probabilities of the original chain conditioned on not having left state 0 for the last time are equal to the state probabilities of its dual conditioned on non-absorption. This allows us to establish the simultaneous existence, and then equivalence, of their limiting conditional distributions. Although a limiting conditional distribution for the dual chain is always a quasi-stationary distribution in the usual sense, a similar statement is not possible for the original chain.
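The dual chain's limiting conditional distribution can be illustrated numerically: iterating the state distribution and renormalizing at each step (i.e., conditioning on non-absorption) converges to the Perron left eigenvector of the substochastic matrix. The 2-state matrix below is a made-up example, not from the paper:

```python
import numpy as np

# Substochastic transition matrix of a dual chain restricted to its
# transient states; each row's missing mass is the one-step
# absorption probability.
P = np.array([[0.2, 0.5],
              [0.4, 0.3]])

# Iterate the state distribution conditioned on non-absorption.
p = np.array([1.0, 0.0])
for _ in range(200):
    p = p @ P
    p /= p.sum()      # renormalize: condition on not yet being absorbed

# Compare with the normalized Perron left eigenvector of P.
w, vl = np.linalg.eig(P.T)
k = np.argmax(w.real)
q = np.real(vl[:, k])
q /= q.sum()
```

The renormalized iterates converge geometrically, at a rate governed by the ratio of the two largest eigenvalue moduli.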

8.
This paper is a continuation of "Diffusions conditionelles, I." If (x_t, z_t) is a two-component diffusion process, it is shown that, under appropriate conditions, the process x_t (t ≤ T), given (z_s, s ≤ T), is a nonhomogeneous strong Markov process, whose generator is found explicitly using the theory of stochastic flows. The filtering equation is reduced to an ordinary partial differential equation.

9.
Starting from a real-valued Markov chain X_0, X_1, …, X_n with stationary transition probabilities, a random element {Y(t); t ∈ [0, 1]} of the function space D[0, 1] is constructed by letting Y(k/n) = X_k, k = 0, 1, …, n, and taking Y(t) constant in between. Sample tightness criteria for sequences {Y_n(t); t ∈ [0, 1]}, n ≥ 1, of such random elements in D[0, 1] are then given in terms of the one-step transition probabilities of the underlying Markov chains. Applications are made to Galton-Watson branching processes.
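The construction of Y from the chain is simply a piecewise-constant interpolation on [0, 1]. A minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def step_element(X):
    """Given chain values X_0, ..., X_n, return the D[0,1] element Y with
    Y(k/n) = X_k and Y constant on each interval [k/n, (k+1)/n)."""
    X = np.asarray(X, dtype=float)
    n = len(X) - 1
    def Y(t):
        # index of the subinterval containing t, clipped at the endpoint
        return X[min(int(np.floor(t * n)), n)]
    return Y

Y = step_element([1.0, 4.0, 9.0])   # n = 2, so Y jumps at t = 1/2 and t = 1
```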

10.
Shy couplings     
A pair (X, Y) of Markov processes on a metric space is called a Markov coupling if X and Y have the same transition probabilities and (X, Y) is a Markov process. We say that a coupling is "shy" if inf_{t≥0} dist(X_t, Y_t) > 0 with positive probability. We investigate whether shy couplings exist for several classes of Markov processes.

11.
Recently, Lefèvre and Picard (Insur Math Econ 49:512–519, 2011) revisited a non-standard risk model defined on a fixed time interval [0,t]. The key assumption is that, if n claims occur during [0,t], their arrival times are distributed as the order statistics of n i.i.d. random variables with distribution function F_t(s), 0 ≤ s ≤ t. The present paper is concerned with two particular cases of that model, namely when F_t(s) is of linear form (as for a (mixed) Poisson process), or of exponential form (as for a linear birth process with immigration or a linear death-counting process). Our main purpose is to obtain, in these cases, an expression for the non-ruin probabilities over [0,t]. This is done by exploiting properties of an underlying family of Appell polynomials. The ultimate non-ruin probabilities are then derived as a limit.

12.

We postulate observations from a Poisson process whose rate parameter modulates between two values determined by an unobserved Markov chain. The theory switches from continuous to discrete time by considering the intervals between observations as a sequence of dependent random variables. A result from hidden Markov models allows us to sample from the posterior distribution of the model parameters given the observed event times using a Gibbs sampler with only two steps per iteration.
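Such a Markov-modulated Poisson process is easy to simulate forward, which is useful for testing any inference scheme. A minimal sketch, not the authors' sampler; the two rates and the switching rates below are made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mmpp(T, rates=(1.0, 5.0), switch=(0.5, 0.5)):
    """Simulate event times on [0, T] of a Poisson process whose rate
    alternates between rates[0] and rates[1], driven by a 2-state
    continuous-time Markov chain with switching rates `switch`."""
    t, state, events = 0.0, 0, []
    while t < T:
        # Holding time of the modulating chain in the current state.
        t_switch = rng.exponential(1.0 / switch[state])
        # Poisson events at rates[state] during [t, t + t_switch), capped at T.
        s = t
        while True:
            s += rng.exponential(1.0 / rates[state])
            if s >= min(t + t_switch, T):
                break
            events.append(s)
        t += t_switch
        state = 1 - state          # flip the hidden state
    return np.array(events)

times = simulate_mmpp(100.0)
```

By memorylessness of the exponential distribution, restarting the event clock at each switch time is a valid way to piece the process together.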

13.
Given a new double-Markov risk model DM=(μ,Q,ν,H;Y,Z) and its double-Markov risk process U={U(t), t ≥ 0}, the ruin/survival problem is addressed. The equations satisfied by the survival probability are derived, together with formulas for computing it; recursion formulas for the survival probability and analytic expressions for the recursion terms are also obtained. The conclusions are expressed via the Q-matrix of one Markov chain and the transition probabilities of another.

14.
A continuous-time Markov chain which is partially observed in Poisson noise is considered, where a structural change in the dynamics of the hidden process occurs at a random change point. Filtering and change-point estimation for the model are discussed. Closed-form recursive estimates of the conditional distribution of the hidden process and the random change point are obtained, given the Poisson process observations.

15.
In a multi-type continuous-time Markov branching process, the asymptotic distribution of the first birth in, and the last death (extinction) of, the kth generation can be determined from the asymptotic behavior of the probability generating function of the vector Z^(k)(t), the size of the kth generation at time t, as t tends to zero or to infinity, respectively. Apart from an appropriate transformation of the time scale, for a large initial population the generations emerge according to an independent sum of compound multi-dimensional Poisson processes and become extinct like a vector of independent reversed Poisson processes. In the first-birth case the results also hold for a multi-type Bellman-Harris process if the life span distributions are differentiable at zero.

16.
Filiz et al. (in arXiv:0809.1393 (2008)) proposed a model for the pattern of defaults seen among a group of firms at the end of a given time period. The ingredients in the model are a graph G=(V,E), where the vertices V correspond to the firms and the edges E describe the network of interdependencies between the firms, a parameter for each vertex that captures the individual propensity of that firm to default, and a parameter for each edge that captures the joint propensity of the two connected firms to default. The correlated default model can be rewritten as a standard Ising model on the graph by identifying the set of defaulting firms in the default model with the set of sites in the Ising model for which the {±1}-valued spin is +1. We ask whether there is a suitable continuous-time Markov chain (X_t)_{t≥0} taking values in the subsets of V such that X_0 = ∅, X_r ⊆ X_s for r ≤ s (that is, once a firm defaults, it stays in default), the distribution of X_T for some fixed time T is the one given by the default model, and the distribution of X_t for other times t is described by a probability distribution in the same family as the default model. In terms of the equivalent Ising model, this corresponds to asking if it is possible to begin at time 0 with a configuration in which every spin is −1 and then flip spins one at a time from −1 to +1 according to Markovian dynamics so that the configuration of spins at each time is described by some Ising model and at time T the configuration is distributed according to the prescribed Ising model. We show for three simple but financially natural special cases that this is not possible outside of the trivial case where there is complete independence between the firms.
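The static default distribution can be made concrete by brute-force enumeration on a toy graph: each default set D ⊆ V gets weight exp(Σ_{v∈D} a_v + Σ_{(u,v)∈E: u,v∈D} b_uv), normalized over all subsets. A minimal sketch; the triangle graph and all parameter values below are made-up illustrations, not from the paper:

```python
import itertools
import math

# Toy instance: 3 firms on a triangle graph.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
a = {0: -1.0, 1: -0.5, 2: -1.5}   # individual default propensities
b = {e: 0.8 for e in E}           # joint default propensities

# Weight of each default set D: exp(sum of vertex terms + active edge terms).
weights = {}
for r in range(len(V) + 1):
    for D in itertools.combinations(V, r):
        s = set(D)
        logw = sum(a[v] for v in s) + sum(b[e] for e in E
                                          if e[0] in s and e[1] in s)
        weights[frozenset(s)] = math.exp(logw)

Z = sum(weights.values())                      # normalizing constant
probs = {D: w / Z for D, w in weights.items()} # distribution over default sets
```

With positive edge parameters b_uv, defaults of connected firms are positively correlated, which is the feature the dynamic-embedding question above is probing.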

17.
The Tsetlin library is a very well-studied model for the way an arrangement of books on a library shelf evolves over time. One of the most interesting properties of this Markov chain is that its spectrum can be computed exactly and that the eigenvalues are linear in the transition probabilities. This result has been generalized in different ways by various people. In this work, we investigate one of the generalizations given by the extended promotion Markov chain on linear extensions of a poset P introduced by Ayyer et al. (J Algebr Comb 39(4):853–881, 2014). They showed that if the poset P is a rooted forest, the transition matrix of this Markov chain has eigenvalues that are linear in the transition probabilities and described their multiplicities. We show that the same property holds for a larger class of posets for which we also derive convergence to stationarity results.

18.
We propose a dynamic model to analyze the credit quality of firms. In the market in which they operate, the firms are divided into a finite number of classes representing their credit status. The cardinality of the population can increase, since new firms can enter the market and the partition is supposed to change over time, due to defaults and changes in credit quality, following a class of Markov processes. Some conditional probabilities related to default times are investigated and the role of occupation numbers is highlighted in this context. In a partial information setting at discrete time, we present a particle filtering technique to numerically compute by simulation the conditional distribution of the number of firms in the credit classes, given the information up to time t.

19.
This paper concerns the construction and regularity of a transition (probability) function of a non-homogeneous continuous-time Markov process with given transition rates and a general state space. Motivated by the restrictions that arise in applications when the transition rates q(t, x, Λ) are required to be continuous (in t ≥ 0) and conservative, we consider the case in which q(t, x, Λ) is only required to satisfy a mild measurability (in t ≥ 0) condition, which generalizes the continuity condition. Under this measurability condition we construct a transition function with the given transition rates, provide a necessary and sufficient condition for it to be regular, and obtain some further results of interest.

20.
This paper presents an exact treatment of a continuous-review inventory system with compound Poisson demand, Erlang-distributed lead times and random supply interruptions. In contrast with the existing models in the literature, we take into account the supplier’s availability in characterizing the lead time of a replenishment order. Assuming that the supplier’s availability can be described by a continuous-time homogeneous Markov chain with two states (on and off) and that stockouts are lost, we derive the stationary distribution of the inventory level (stock-on-hand) under an (s, Q)-type control policy. This probability distribution is then used to formulate an exact expression for the long-run average cost per unit time of operating the inventory system. Some numerical results are also provided.
