Similar Articles
20 similar articles found (search time: 31 ms)
1.
A discrete-time mover-stayer (MS) model is an extension of a discrete-time Markov chain which assumes a simple form of population heterogeneity. The individuals in the population are either stayers, who never leave their initial states, or movers, who move according to a Markov chain. We, in turn, propose an extension of the MS model by specifying the stayer's probability as a logistic function of an individual's covariates. Such an extension has recently been discussed for a continuous-time MS model but has not previously been considered for a discrete-time one. This extension allows for an in-sample classification of subjects who never left their initial states into stayers or movers. The parameters of the extended MS model are estimated using the expectation-maximization algorithm. A novel bootstrap procedure is proposed for out-of-sample validation of the in-sample classification. The bootstrap procedure is also applied to validate the in-sample classification with respect to a more general dichotomy than the MS one. The developed methods are illustrated with a data set on installment loans, but they can be applied more broadly in the credit risk area, where prediction of the creditworthiness of a loan borrower or lessee is of major interest.
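The in-sample classification idea can be illustrated with a small Bayes-rule sketch (not the paper's EM-based estimator): given an illustrative logistic stayer probability with made-up coefficients `beta0` and `beta1`, the posterior probability that a subject who never moved is a stayer grows with the length of the observed history.

```python
import math

def stayer_probability(x, beta0, beta1):
    """Logistic model for the prior probability that a subject with
    covariate x is a stayer (illustrative parameterization)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def posterior_stayer(x, beta0, beta1, p_stay, n_periods):
    """Posterior probability of being a stayer, given that the subject
    never left its initial state over n_periods transitions.
    p_stay is the mover's one-step probability of remaining in place."""
    s = stayer_probability(x, beta0, beta1)
    mover_path = (1.0 - s) * p_stay ** n_periods  # a mover who happened to stay
    return s / (s + mover_path)

# The longer a subject stays put, the more the evidence favours "stayer".
p1 = posterior_stayer(x=0.0, beta0=0.0, beta1=1.0, p_stay=0.8, n_periods=1)
p10 = posterior_stayer(x=0.0, beta0=0.0, beta1=1.0, p_stay=0.8, n_periods=10)
```

With these toy numbers the posterior rises from about 0.56 after one period to about 0.90 after ten, which is the intuition behind classifying long-time non-movers as stayers.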

2.
A Markov chain is a natural probability model for accounts receivable. For example, accounts that are 'current' this month have a probability of moving next month into 'current', 'delinquent' or 'paid-off' states. If the transition matrix of the Markov chain were known, forecasts could be formed for future months for each state. This paper applies a Markov chain model to subprime loans that appear neither homogeneous nor stationary. Innovative estimation methods for the transition matrix are proposed. Bayes and empirical Bayes estimators are derived where the population is divided into segments or subpopulations whose transition matrices differ in some, but not all, entries. Loan-level models for key transition matrix entries can be constructed where loan-level covariates capture the non-stationarity of the transition matrix. Prediction is illustrated on a $7 billion portfolio of subprime fixed first mortgages and the forecasts show good agreement with actual balances in the delinquency states. Copyright © 2010 John Wiley & Sons, Ltd.
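A minimal sketch of the forecasting idea described above, with an invented three-state transition matrix rather than one estimated from the paper's portfolio:

```python
import numpy as np

# One-month transition matrix over (current, delinquent, paid-off);
# the entries are illustrative, not estimates from the paper.
P = np.array([
    [0.90, 0.07, 0.03],   # current -> ...
    [0.30, 0.55, 0.15],   # delinquent -> ...
    [0.00, 0.00, 1.00],   # paid-off is absorbing
])

pi0 = np.array([1.0, 0.0, 0.0])            # portfolio starts fully current
pi3 = pi0 @ np.linalg.matrix_power(P, 3)   # forecast three months ahead
```

Each month's forecast is just the current state distribution multiplied by the transition matrix, so an n-month forecast uses the n-th matrix power.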

3.
Various process models for discrete manufacturing systems (parts industry) can be treated as bounded discrete-space Markov chains, completely characterized by the original in-control state and a transition matrix for shifts to an out-of-control state. The present work extends these models by using a continuous-state Markov chain, incorporating non-random corrective actions. These actions are to be realized according to the statistical process control (SPC) technique and should substantially affect the model. The developed stochastic model yields a Laplace distribution of the process mean. Real-data tests confirm its applicability for the parts industry and show that the distribution parameter is mainly controlled by the SPC sample size.

4.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.

5.
The usual tool for modelling bond ratings migration is a discrete, time-homogeneous Markov chain. Such a model assumes that all bonds are homogeneous with respect to their movement behaviour among rating categories and that the movement behaviour does not change over time. However, among the recognized sources of heterogeneity in ratings migration is the age of a bond (time elapsed since issuance). It has been observed that young bonds have a lower propensity to change ratings, and thus to default, than more seasoned bonds. The aim of this paper is to introduce a continuous, time-non-homogeneous model for bond ratings migration, which also incorporates a simple form of population heterogeneity. The specific form of heterogeneity postulated by the proposed model appears to be suitable for modelling the effect of the age of a bond on its propensity to change ratings. This model, called a mover-stayer model, is an extension of a Markov chain. This paper derives the maximum likelihood estimators for the parameters of a continuous-time mover-stayer model based on a sample of independent continuously monitored histories of the process, and develops the likelihood ratio statistic for discriminating between the Markov chain and the mover-stayer model. The methods are illustrated using a sample of rating histories of young corporate issuers. For these issuers the default probabilities predicted by the Markov chain and mover-stayer models are different. In particular, for bonds 1-4 years old the mover-stayer model estimates substantially lower default probabilities from rating C than a Markov chain. Copyright © 2004 John Wiley & Sons, Ltd.

6.
The transition matrix of a discrete Markov chain is called monotone if each row stochastically dominates the row above it. Monotonicity is an ideal assumption to impose on a Markov chain model of mobility. Monotonicity is behaviorally weak yet mathematically strong. It is behaviorally weak in the sense that it is theoretically plausible and is empirically supported. It is mathematically strong in the sense that monotone Markov chains have a number of convenient mathematical properties. This paper reviews the convenient properties and applies the monotonicity concept to immobility measurement.
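The row-dominance condition can be checked directly on the cumulative row sums; the matrices below are invented examples, not data from the paper:

```python
import numpy as np

def is_monotone(P):
    """A transition matrix is monotone when each row stochastically
    dominates the row above it: the cumulative row sums must be
    non-increasing as you move down the matrix."""
    C = np.cumsum(P, axis=1)
    return bool(np.all(C[1:] <= C[:-1] + 1e-12))

P_mono = np.array([[0.6, 0.3, 0.1],
                   [0.3, 0.4, 0.3],
                   [0.1, 0.3, 0.6]])   # mass shifts to higher states down the rows
```

Reversing the row order of `P_mono` breaks the dominance ordering, so the same function also serves as a counterexample detector.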

7.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
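Finding the closest reversible chain requires the paper's convex solver; the sketch below only checks the reversibility (detailed balance) property itself, on invented example matrices:

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-10):
    """Detailed balance: pi_i P_ij = pi_j P_ji for all i, j."""
    pi = stationary_distribution(P)
    F = pi[:, None] * P          # flow matrix; symmetric iff reversible
    return bool(np.allclose(F, F.T, atol=tol))

# Birth-death chains are reversible; this 3-state example is one.
P_bd = np.array([[0.50, 0.50, 0.00],
                 [0.25, 0.50, 0.25],
                 [0.00, 0.50, 0.50]])

# A deterministic 3-cycle is the standard non-reversible counterexample.
P_cyc = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
```

Reversibility is exactly the symmetry of the probability-flow matrix, which is why non-reversible approximations lose the real-valued spectrum mentioned in the abstract.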

8.
We study the limit behaviour of a nonlinear differential equation whose solution is a superadditive generalisation of a stochastic matrix, prove convergence, and provide necessary and sufficient conditions for ergodicity. In the linear case, the solution of our differential equation is equal to the matrix exponential of an intensity matrix and can then be interpreted as the transition operator of a homogeneous continuous-time Markov chain. Similarly, in the generalised nonlinear case that we consider, the solution can be interpreted as the lower transition operator of a specific set of non-homogeneous continuous-time Markov chains, called an imprecise continuous-time Markov chain. In this context, our convergence result shows that for a fixed initial state, an imprecise continuous-time Markov chain always converges to a limiting distribution, and our ergodicity result provides a necessary and sufficient condition for this limiting distribution to be independent of the initial state.
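In the linear special case mentioned above, the transition operator is the matrix exponential of the intensity matrix. The sketch below (with a hand-rolled scaling-and-squaring exponential and an invented two-state generator `Q`) shows every row of the operator converging to the same limiting distribution, which is what ergodicity means here:

```python
import numpy as np

def expm(A, squarings=20, terms=18):
    """Matrix exponential by scaling and squaring with a truncated
    Taylor series (adequate for small, well-scaled intensity matrices)."""
    B = A / 2.0 ** squarings
    S, T = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ B / k
        S = S + T
    for _ in range(squarings):
        S = S @ S
    return S

# Intensity matrix of a homogeneous 2-state chain (rows sum to zero).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
Pt = expm(Q * 50.0)   # for large t, every row approaches (2/3, 1/3)
```

The stationary distribution solves pi Q = 0, giving (2/3, 1/3) for this generator; both rows of `Pt` converge to it regardless of the initial state.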

9.
Members of a population of fixed size N can be in any one of n states. In discrete time the individuals jump from one state to another, independently of each other, and with probabilities described by a homogeneous Markov chain. At each time a sample of size M is withdrawn (with replacement). Based on these observations, and using the techniques of Hidden Markov Models, recursive estimates for the distribution of the population are obtained.

10.
Two eigenvalue measures of immobility are proposed for social processes described by a Markov chain. One is the second largest eigenvalue modulus of the chain's transition matrix. The other is the second largest eigenvalue modulus of a closely related transition matrix. The two eigenvalue measures are compared to each other and to correlation and regression-to-the-mean measures. In illustrative applications to intergenerational occupational mobility, the eigenvectors corresponding to the eigenvalue measures are found to be good proxies for occupational status rankings for a number of countries, thus reinforcing a pattern noted by Klatsky and Hodge and by Duncan-Jones.
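Computing the second largest eigenvalue modulus is a one-liner with numpy; the 2x2 chain below is an invented example whose eigenvalues are 1 and 0.7:

```python
import numpy as np

def slem(P):
    """Second largest eigenvalue modulus of a transition matrix,
    an eigenvalue-based immobility measure: values near 1 indicate
    slow mixing, i.e. high immobility."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return float(mods[1])

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
lam = slem(P)   # eigenvalues of P are 1 and 0.7
```

The largest eigenvalue of any stochastic matrix is 1, so the second largest modulus is what governs the speed of convergence to equilibrium.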

11.
Decision-making in an environment of uncertainty and imprecision for real-world problems is a complex task. This paper introduces general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesaro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.

12.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern may be expressed as such a product. This result has implications for the embedding problem for doubly stochastic Markov chains. We also apply the obtained characterization to chain majorization.

13.
In previous work, the embedding problem has been examined within the entire set of discrete-time Markov chains. However, for several phenomena, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove necessary conditions on the transition matrix of a discrete-time Markov chain with ordered states for it to be embeddable in a state-wise monotone Markov chain over time intervals of length 0.5: a transition matrix with a square root within the set of state-wise monotone matrices has a trace at least equal to 1.
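A hedged sketch of the necessary condition: the trace test is from the abstract, while `principal_square_root` is only an illustrative way to produce a candidate half-interval matrix (it assumes `P` is diagonalizable with eigenvalues off the negative real axis):

```python
import numpy as np

def passes_trace_condition(P):
    """Necessary condition from the paper: if P has a square root within
    the set of state-wise monotone matrices, then trace(P) >= 1."""
    return float(np.trace(P)) >= 1.0

def principal_square_root(P):
    """Candidate half-interval transition matrix via eigendecomposition
    (illustrative; assumes P is diagonalizable with eigenvalues off the
    negative real axis)."""
    w, V = np.linalg.eig(P)
    R = V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)
    return np.real_if_close(R)

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])   # eigenvalues 1 and 0.5, trace 1.5 >= 1
R = principal_square_root(P)
```

Passing the trace test does not guarantee embeddability; one would still have to verify that the square root is itself stochastic and state-wise monotone.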

14.
Stochastic Analysis and Applications, 2013, 31(4): 935-951
Abstract

In this paper, we investigate the stochastic stabilization problem for a class of linear discrete time-delay systems with Markovian jump parameters. The jump parameters considered here are modeled by a discrete-time Markov chain. Our attention is focused on the design of a linear memoryless state-feedback controller such that stochastic stability of the resulting closed-loop system is guaranteed whether or not the system under consideration has parameter uncertainties. Sufficient conditions, given in terms of a set of solutions of coupled matrix inequalities, are proposed to solve the above problems.

15.
In this paper, we use the Markov chain censoring technique to study infinite state Markov chains whose transition matrices possess block-repeating entries. We demonstrate that a number of important probabilistic measures are invariant under censoring. Informally speaking, these measures involve first passage times or expected numbers of visits to certain levels where other levels are taboo; they are closely related to the so-called fundamental matrix of the Markov chain which is also studied here. Factorization theorems for the characteristic equation of the blocks of the transition matrix are obtained. Necessary and sufficient conditions are derived for such a Markov chain to be positive recurrent, null recurrent, or transient based either on spectral analysis, or on a property of the fundamental matrix. Explicit expressions are obtained for key probabilistic measures, including the stationary probability vector and the fundamental matrix, which could be potentially used to develop various recursive algorithms for computing these measures.

16.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the 'duration specific' chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing, in the latter it is regular.
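One way to realize the construction: expand the state space to (location, duration) pairs with duration capped at D. The function below is an illustrative sketch for case (b), where the maximum duration is non-absorbing, with an invented decreasing `leave(d)` to encode cumulative inertia:

```python
import numpy as np

def duration_chain(n_loc, D, leave):
    """Finite-state chain on (location, duration) pairs with duration
    capped at D. leave(d) is the probability of leaving after d periods,
    assumed decreasing in d (cumulative inertia). On leaving, the
    individual moves to another location with duration reset to 1; the
    cap D is non-absorbing, matching case (b) of the abstract."""
    idx = {(l, d): l * D + d - 1 for l in range(n_loc) for d in range(1, D + 1)}
    P = np.zeros((n_loc * D, n_loc * D))
    for (l, d), i in idx.items():
        q = leave(d)
        P[i, idx[(l, min(d + 1, D))]] += 1.0 - q   # stay: age one period, capped
        for m in range(n_loc):                      # leave: reset duration to 1
            if m != l:
                P[i, idx[(m, 1)]] += q / (n_loc - 1)
    return P

P = duration_chain(n_loc=2, D=3, leave=lambda d: 0.4 / d)
```

With two locations and a cap of three, the expanded chain has six states, and each row is a proper probability distribution.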

17.
A class of models called interactive Markov chains is studied in both discrete and continuous time. These models were introduced by Conlisk and serve as a rich class for sociological modeling, because they allow for interactions among individuals. In discrete time, it is proved that the Markovian processes converge to a deterministic process almost surely as the population size becomes infinite. More importantly, the normalized process is shown to be asymptotically normal with specified mean vector and covariance matrix. In continuous time, the chain is shown to converge weakly to a diffusion process with specified drift and scale terms. The distributional results will allow for the construction of a likelihood function from interactive Markov chain data, so these results will be important for questions of statistical inference. An example from manpower planning is given which indicates the use of this theory in constructing and evaluating control policies for certain social systems.

18.
Potential theory for ergodic Markov chains (with a discrete state space and a continuous parameter) is developed in terms of the fundamental matrix of a chain. A notion of an ergodic potential for a chain is introduced and a form of the Riesz decomposition theorem for measures is proved. Ergodic potentials of charges (with total charge zero) are shown to play the role of Green potentials for transient chains.

19.
We propose a computational approach for implementing discrete hidden semi-Markov chains. A discrete hidden semi-Markov chain is composed of a non-observable or hidden process which is a finite semi-Markov chain and a discrete observable process. Hidden semi-Markov chains possess both the flexibility of hidden Markov chains for approximating complex probability distributions and the flexibility of semi-Markov chains for representing temporal structures. Efficient algorithms for computing characteristic distributions organized according to the intensity, interval and counting points of view are described. The proposed computational approach in conjunction with statistical inference algorithms previously proposed makes discrete hidden semi-Markov chains a powerful model for the analysis of samples of non-stationary discrete sequences. Copyright © 1999 John Wiley & Sons, Ltd.
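The defining feature of a semi-Markov chain, explicit sojourn-time distributions decoupled from the embedded transition matrix, can be sketched with a toy simulator (illustrative only; the paper's contribution is the computation of characteristic distributions, not simulation):

```python
import random

def simulate_semi_markov(P, sojourn, start, n_steps, rng):
    """Simulate a discrete semi-Markov chain: the embedded chain P picks
    the next state, while sojourn[state](rng) draws how many time steps
    are spent there. This decoupling is what distinguishes a semi-Markov
    chain from a Markov chain, whose sojourn times are forced to be
    geometric."""
    path, state = [], start
    while len(path) < n_steps:
        stay = sojourn[state](rng)
        path.extend([state] * stay)
        r, acc = rng.random(), 0.0      # draw next state from row P[state]
        for j, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = j
                break
    return path[:n_steps]

rng = random.Random(0)
P = [[0.0, 1.0], [1.0, 0.0]]                  # embedded chain alternates states
sojourn = [lambda g: 1 + g.randrange(3)] * 2  # uniform sojourn of 1-3 steps
path = simulate_semi_markov(P, sojourn, start=0, n_steps=20, rng=rng)
```

A hidden semi-Markov chain would add an observation distribution per hidden state on top of this latent path.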

20.
Optimization, 2012, 61(2-3): 271-283
This paper presents a new concept of Markov decision processes: continuous-time shock Markov decision processes, which model Markovian controlled systems sequentially shocked by their environment. Between two adjacent shocks, the system can be modeled by continuous-time Markov decision processes. At each shock, however, the system's parameters change and an instantaneous state transition occurs. After presenting the model, we prove that the optimality equation, which consists of countably many equations, has a unique solution in some function space Ω.
