Related Articles (20 results)
1.
A Markov chain is a natural probability model for accounts receivable. For example, accounts that are ‘current’ this month have a probability of moving next month into ‘current’, ‘delinquent’ or ‘paid‐off’ states. If the transition matrix of the Markov chain were known, forecasts could be formed for future months for each state. This paper applies a Markov chain model to subprime loans that appear neither homogeneous nor stationary. Innovative estimation methods for the transition matrix are proposed. Bayes and empirical Bayes estimators are derived where the population is divided into segments or subpopulations whose transition matrices differ in some, but not all entries. Loan‐level models for key transition matrix entries can be constructed where loan‐level covariates capture the non‐stationarity of the transition matrix. Prediction is illustrated on a $7 billion portfolio of subprime fixed first mortgages and the forecasts show good agreement with actual balances in the delinquency states. Copyright © 2010 John Wiley & Sons, Ltd.
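The forecasting mechanism this abstract describes can be sketched in a few lines. The three-state transition matrix below is an illustrative assumption, not the paper's estimated matrix:

```python
import numpy as np

# Hypothetical monthly transition matrix over (current, delinquent, paid-off);
# the entries are made up for illustration only.
P = np.array([
    [0.90, 0.07, 0.03],   # current  -> current / delinquent / paid-off
    [0.30, 0.60, 0.10],   # delinquent -> ...
    [0.00, 0.00, 1.00],   # paid-off is absorbing
])

x = np.array([1.0, 0.0, 0.0])   # portfolio starts entirely 'current'
for _ in range(12):             # forecast the state mix 12 months ahead
    x = x @ P

print(x)  # forecast shares in each state after one year
```

Each month the state-share vector is multiplied by the transition matrix; the same recursion underlies the paper's balance forecasts once the matrix is estimated.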

2.
The subdominant eigenvalue of the transition probability matrix of a Markov chain is a determining factor in the speed of transition of the chain to a stationary state. However, these eigenvalues can be difficult to estimate in a theoretical sense. In this paper we revisit the problem of dynamically organizing a linear list. Items in the list are selected with certain unknown probabilities and then returned to the list according to one of two schemes: the move-to-front scheme or the transposition scheme. The eigenvalues of the transition probability matrix Q of the former scheme are well-known but those of the latter T are not. Nevertheless the transposition scheme gives rise to a reversible Markov chain. This enables us to employ a generalized Rayleigh-Ritz theorem to show that the subdominant eigenvalue of T is at least as large as the subdominant eigenvalue of Q.
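For a tiny list of three items, both chains can be built explicitly on the six permutations and the inequality checked numerically. The selection probabilities below are illustrative assumptions:

```python
import itertools
import numpy as np

p = {0: 0.5, 1: 0.3, 2: 0.2}    # assumed (unknown in practice) selection probabilities
perms = list(itertools.permutations(range(3)))
idx = {s: k for k, s in enumerate(perms)}

Q = np.zeros((6, 6))  # move-to-front chain
T = np.zeros((6, 6))  # transposition chain
for s in perms:
    for i, pi in p.items():
        j = s.index(i)
        # move-to-front: the selected item goes to the head of the list
        mtf = (i,) + tuple(x for x in s if x != i)
        Q[idx[s], idx[mtf]] += pi
        # transposition: the selected item swaps with its predecessor (head stays put)
        if j == 0:
            tr = s
        else:
            lst = list(s)
            lst[j - 1], lst[j] = lst[j], lst[j - 1]
            tr = tuple(lst)
        T[idx[s], idx[tr]] += pi

def second_eig(M):
    """Second-largest real eigenvalue (the subdominant eigenvalue)."""
    ev = np.sort(np.real(np.linalg.eigvals(M)))[::-1]
    return ev[1]

print(second_eig(Q), second_eig(T))  # the theorem says the second is >= the first
```

This is only a numerical spot-check of the paper's inequality, not its Rayleigh-Ritz proof.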

3.
A discrete-time Markov chain assumes that the population is homogeneous: each individual in the population evolves according to the same transition matrix. In contrast, a discrete mover‐stayer (MS) model postulates a simple form of population heterogeneity; in each initial state, there is a proportion of individuals who never leave this state (stayers) and a complementary proportion of individuals who evolve according to a Markov chain (movers). The MS model was previously extended by specifying the stayer probability to be a logistic function of an individual's covariates while keeping the same transition matrix for all movers. We further extend the MS model by allowing each mover to have his or her own covariate-dependent transition matrix. The model for a mover's transition matrix is related to the existing Markov chain mixture model with mixing on the speed of movement of the Markov chains. The proposed model is estimated using the expectation‐maximization algorithm and illustrated with a large data set on car loans and a simulation study.

4.
1. Introduction
The motivation for writing this paper was the calculation of the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, often called the dual process in the literature; see, for example, [3–7]. For a finite Markov chain, …

5.
We study the limit behaviour of a nonlinear differential equation whose solution is a superadditive generalisation of a stochastic matrix, prove convergence, and provide necessary and sufficient conditions for ergodicity. In the linear case, the solution of our differential equation is equal to the matrix exponential of an intensity matrix and can then be interpreted as the transition operator of a homogeneous continuous-time Markov chain. Similarly, in the generalised nonlinear case that we consider, the solution can be interpreted as the lower transition operator of a specific set of non-homogeneous continuous-time Markov chains, called an imprecise continuous-time Markov chain. In this context, our convergence result shows that for a fixed initial state, an imprecise continuous-time Markov chain always converges to a limiting distribution, and our ergodicity result provides a necessary and sufficient condition for this limiting distribution to be independent of the initial state.

6.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern may be expressed as a finite product of reflexive, transitive, and symmetric patterns. This result has implications for the embedding problem for doubly stochastic Markov chains. We also apply the obtained characterization to chain majorization.

7.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach to solve this problem is to construct a Markov chain which converges to the desired probability distribution, and run this Markov chain to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require the knowledge of the stationary distribution only up to a normalizing constant. Each of these methods produces a reversible Markov chain which has the same stationary distribution as the original chain, and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result which shows that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
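A minimal sketch of the setting the abstract works in: a reversible chain built by the Metropolis-Hastings rule from a target known only up to a normalizing constant. The target weights and ring proposal below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, 2.0, 4.0, 2.0, 1.0])   # unnormalized target weights (illustrative)
n = len(w)

state = 0
counts = np.zeros(n)
for _ in range(200_000):
    prop = (state + rng.choice([-1, 1])) % n          # symmetric proposal on a ring
    if rng.random() < min(1.0, w[prop] / w[state]):   # only the weight *ratio* is needed
        state = prop
    counts[state] += 1

print(counts / counts.sum())  # empirical distribution, close to w / w.sum()
```

Because the acceptance ratio uses only w[prop] / w[state], the normalizing constant never enters, which is exactly the knowledge assumption the abstract states.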

8.
Two eigenvalue measures of immobility are proposed for social processes described by a Markov chain. One is the second largest eigenvalue modulus of the chain's transition matrix. The other is the second largest eigenvalue modulus of a closely related transition matrix. The two eigenvalue measures are compared to each other and to correlation and regression‐to‐the‐mean measures. In illustrative applications to intergenerational occupational mobility, the eigenvectors corresponding to the eigenvalue measures are found to be good proxies for occupational status rankings for a number of countries, thus reinforcing a pattern noted by Klatsky and Hodge and by Duncan‐Jones.
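The first of the two measures is straightforward to compute. The mobility matrix below is a made-up illustration, not data from the paper:

```python
import numpy as np

# Illustrative intergenerational mobility matrix (rows: parent's class,
# columns: child's class); the numbers are assumptions.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

moduli = sorted(abs(np.linalg.eigvals(P)), reverse=True)
slem = moduli[1]   # second largest eigenvalue modulus: the immobility measure
print(round(slem, 3))
```

A value near 1 indicates slow regression to the stationary class distribution (high immobility); a value near 0 indicates rapid mixing.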

9.
A maximum out forest of a digraph is its spanning subgraph that consists of disjoint diverging trees and has the maximum possible number of arcs. For an arbitrary weighted digraph, we consider a matrix of specific weights of maximum out forests and demonstrate how this matrix can be used to get a graph-theoretic interpretation for the limiting probabilities of Markov chains. For a special (nonclassical) correspondence between Markov chains and weighted digraphs, the matrix of Cesàro limiting transition probabilities of any finite homogeneous Markov chain coincides with the normalized matrix of maximum out forests of the corresponding digraph. This provides a finite (combinatorial) method to calculate the limiting probabilities of Markov chains and thus their stationary distributions. On the other hand, the Markov chain technique provides the proofs to some statements about digraphs.

10.
In this paper, we use the Markov chain censoring technique to study infinite state Markov chains whose transition matrices possess block-repeating entries. We demonstrate that a number of important probabilistic measures are invariant under censoring. Informally speaking, these measures involve first passage times or expected numbers of visits to certain levels where other levels are taboo; they are closely related to the so-called fundamental matrix of the Markov chain which is also studied here. Factorization theorems for the characteristic equation of the blocks of the transition matrix are obtained. Necessary and sufficient conditions are derived for such a Markov chain to be positive recurrent, null recurrent, or transient based either on spectral analysis, or on a property of the fundamental matrix. Explicit expressions are obtained for key probabilistic measures, including the stationary probability vector and the fundamental matrix, which could be potentially used to develop various recursive algorithms for computing these measures.

11.
We consider an accessibility index for the states of a discrete-time, ergodic, homogeneous Markov chain on a finite state space; this index is naturally associated with the random walk centrality introduced by Noh and Reiger (2004) for a random walk on a connected graph. We observe that the vector of accessibility indices provides a partition of Kemeny’s constant for the Markov chain. We provide three characterizations of this accessibility index: one in terms of the first return time to the state in question, and two in terms of the transition matrix associated with the Markov chain. Several bounds are provided on the accessibility index in terms of the eigenvalues of the transition matrix and the stationary vector, and the bounds are shown to be tight. The behaviour of the accessibility index under perturbation of the transition matrix is investigated, and examples exhibiting some counter-intuitive behaviour are presented. Finally, we characterize the situation in which the accessibility indices for all states coincide.
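The fact that Kemeny's constant is the same weighted sum of mean first passage times from every starting state, which is what makes a partition of it meaningful, can be checked numerically. The chain below is an illustrative assumption:

```python
import numpy as np

# Illustrative ergodic transition matrix (not from the paper).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# Stationary vector: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(abs(evals - 1))])
pi /= pi.sum()

# Fundamental matrix Z = (I - P + 1 pi^T)^{-1}.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

# Mean first passage times: m_ij = (z_jj - z_ij) / pi_j (diagonal is 0).
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

# Kemeny sum sum_j pi_j * m_ij, computed separately for each start state i.
K = (M * pi[None, :]).sum(axis=1)
print(K)  # all entries equal: Kemeny's constant, independent of the start state
```

The equal entries of K are what the paper's accessibility indices partition state by state.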

12.
The main aim of this paper is to examine the applicability of generalized inverses to a wide variety of problems in applied probability where a Markov chain is present either directly or indirectly through some form of imbedding. By characterizing all generalized inverses of I − P, where P is the transition matrix of a finite irreducible discrete-time Markov chain, we are able to obtain general procedures for finding stationary distributions, moments of the first passage time distributions, and asymptotic forms for the moments of the occupation-time random variables. It is shown that all known explicit methods for examining these problems can be expressed in this generalized inverse framework. More generally, in the context of a Markov renewal process setting, the aforementioned problems are also examined using generalized inverses of I − P. As a special case, Markov chains in continuous time are considered, and we show that the generalized inverse technique can be applied directly to the infinitesimal generator of the process, instead of to I − P, where P is the transition matrix of the discrete-time jump Markov chain.

13.
This paper proposes a single sample path-based sensitivity estimation method for discrete event systems. The method employs two major techniques: uniformization and importance sampling. By uniformization, steady-state performance measures can be estimated via the transition matrix of the embedded Markov chain in the uniformized process. The sensitivity of a transition matrix is obtained by applying importance sampling to an ensemble average of sample paths. The algorithm developed for this method is easy to implement, and the method applies to more systems than infinitesimal perturbation analysis.

14.
The Tsetlin library is a very well-studied model for the way an arrangement of books on a library shelf evolves over time. One of the most interesting properties of this Markov chain is that its spectrum can be computed exactly and that the eigenvalues are linear in the transition probabilities. This result has been generalized in different ways by various people. In this work, we investigate one of the generalizations given by the extended promotion Markov chain on linear extensions of a poset P introduced by Ayyer et al. (J Algebr Comb 39(4):853–881, 2014). They showed that if the poset P is a rooted forest, the transition matrix of this Markov chain has eigenvalues that are linear in the transition probabilities and described their multiplicities. We show that the same property holds for a larger class of posets for which we also derive convergence to stationarity results.

15.
Breuer, Lothar. Queueing Systems 45(1):47–57, 2003.
In this paper, the multi-server queue with general service time distribution and Lebesgue-dominated iid inter-arrival times is analyzed. This is done by introducing auxiliary variables for the remaining service times and then examining the embedded Markov chain at arrival instants. The concept of piecewise-deterministic Markov processes is applied to model the inter-arrival behaviour. It turns out that the transition probability kernel of the embedded Markov chain at arrival instants has the form of a lower Hessenberg matrix and hence admits an operator-geometric stationary distribution. Thus it is shown that matrix-analytic methods can be extended to provide a modeling tool even for the general multi-server queue.

16.
Our initial motivation was to understand links between Wiener-Hopf factorizations for random walks and LU-factorizations for Markov chains as interpreted by Grassman (Eur. J. Oper. Res. 31(1):132–139, 1987). In fact, the first are particular cases of the second, up to Fourier transforms. To show this, we produce a new proof of LU-factorizations which is valid for any Markov chain with a denumerable state space equipped with a pre-order relation. The factors have nice interpretations in terms of subordinated Markov chains. In particular, the LU-factorization of the potential matrix determines the law of the global minimum of the Markov chain. For any matrix, there are two main LU-factorizations, depending on whether the 1s are placed on the diagonal of the first or of the second factor. When we factorize the generator of a Markov chain, one factorization is always valid while the other requires some hypothesis on the graph of the transition matrix. This asymmetry comes from the fact that the class of sub-stochastic matrices is not stable under transposition. We generalize our work to the class of matrices with spectral radius less than one; this allows us to play with transposition and thus with time reversal. We study some particular cases such as skip-free Markov chains, random walks (this gives the Wiener-Hopf factorization), and reversible Markov chains (this gives the Cholesky factorization). We use the LU-factorization to compute invariant measures. We present some pathologies: non-associativity and non-uniqueness; these can be cured by suitable assumptions (e.g. irreducibility).

17.
In this note, we discuss a Markov chain formulation of the k-SAT problem and the properties of the resulting transition matrix. The motivation behind this work is to relate the phase transition in the k-SAT problem to the phenomenon of “cut-off” in Markov chains. We use the idea of weak-lumpability to reduce the dimension of our transition matrix to manageable proportions.

18.
Decision-making in an environment of uncertainty and imprecision is a complex task for real-world problems. This paper introduces general finite-state fuzzy Markov chains that have a finite convergence to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrency does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.

19.
Mixing time quantifies the convergence speed of a Markov chain to the stationary distribution. It is an important quantity related to the performance of MCMC sampling. It is known that the mixing time of a reversible chain can be significantly improved by lifting, resulting in an irreversible chain, while changing the topology of the chain. We supplement this result by showing that if the connectivity graph of a Markov chain is a cycle, then there is an Ω(n²) lower bound for the mixing time. This is the same order of magnitude that is known for reversible chains on the cycle.

20.
An absorbing Markov chain is an important statistical model, widely used in algorithmic modeling across many disciplines, such as digital image processing and network analysis. To obtain the stationary distribution for such a model, the inverse of the transition matrix usually needs to be calculated, which is difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution for models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it can handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains. Similar techniques can be used to avoid calculating the inverse matrix in the general method.
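For reference, the classical fundamental-matrix computation that the abstract contrasts with (the one requiring a matrix inverse) looks like this for a fair-coin gambler's ruin on five states; the setup is illustrative:

```python
import numpy as np

# Gambler's ruin on {0, 1, 2, 3, 4}: states 0 (ruin) and 4 (win) are absorbing;
# a fair coin moves the fortune up or down by 1 (illustrative setup).
p = 0.5
Q = np.array([[0, p, 0],
              [p, 0, p],
              [0, p, 0]])        # transient-to-transient block (states 1, 2, 3)
R = np.array([[p, 0],
              [0, 0],
              [0, p]])           # transient-to-absorbing block (ruin, win)

N = np.linalg.inv(np.eye(3) - Q) # fundamental matrix: the inverse the paper avoids
B = N @ R                        # absorption probabilities

print(B)  # row i gives [P(ruin), P(win)] starting from fortune i+1
```

For a fair coin the win probability from fortune k out of 4 is k/4, which the matrix computation reproduces; the paper's eigenvector approach obtains the same quantities without forming the inverse.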
