Similar Articles
20 similar articles found (search time: 78 ms)
1.
A discrete time Markov chain assumes that the population is homogeneous: each individual in the population evolves according to the same transition matrix. In contrast, a discrete mover-stayer (MS) model postulates a simple form of population heterogeneity; in each initial state, there is a proportion of individuals who never leave this state (stayers) and a complementary proportion of individuals who evolve according to a Markov chain (movers). The MS model has been extended by specifying the stayer probability as a logistic function of an individual's covariates while leaving the same transition matrix for all movers. We further extend the MS model by allowing each mover to have his or her own covariate-dependent transition matrix. The model for a mover's transition matrix is related to the extant mixture model of Markov chains with mixing on the speed of movement of the chains. The proposed model is estimated using the expectation-maximization algorithm and illustrated with a large data set on car loans and with simulations.
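As a rough illustration of the mover-stayer structure described above, the following sketch simulates individual paths; the transition matrix and stayer probabilities are entirely hypothetical, and a covariate-dependent version of the kind proposed in the paper would replace the fixed `stayer_prob` and `P` with logistic and covariate-driven counterparts:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 3
P = np.array([[0.7, 0.2, 0.1],          # movers' transition matrix (illustrative)
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])
stayer_prob = np.array([0.4, 0.2, 0.1])  # P(stayer | initial state), illustrative

def simulate_ms(initial_state, n_steps):
    """Simulate one individual's path under a mover-stayer model."""
    if rng.random() < stayer_prob[initial_state]:
        return [initial_state] * (n_steps + 1)   # stayer: never leaves
    path = [initial_state]
    for _ in range(n_steps):                     # mover: follows the Markov chain
        path.append(int(rng.choice(n_states, p=P[path[-1]])))
    return path

paths = [simulate_ms(0, 10) for _ in range(5)]
```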

2.
The usual tool for modelling bond ratings migration is a discrete, time-homogeneous Markov chain. Such a model assumes that all bonds are homogeneous with respect to their movement behaviour among rating categories and that this behaviour does not change over time. However, a recognized source of heterogeneity in ratings migration is the age of a bond (time elapsed since issuance). It has been observed that young bonds have a lower propensity to change ratings, and thus to default, than more seasoned bonds. The aim of this paper is to introduce a continuous, time-non-homogeneous model for bond ratings migration which also incorporates a simple form of population heterogeneity. The specific form of heterogeneity postulated by the proposed model appears suitable for modelling the effect of a bond's age on its propensity to change ratings. This model, called a mover–stayer model, is an extension of a Markov chain. The paper derives the maximum likelihood estimators for the parameters of a continuous time mover–stayer model based on a sample of independent, continuously monitored histories of the process, and develops the likelihood ratio statistic for discriminating between the Markov chain and the mover–stayer model. The methods are illustrated using a sample of rating histories of young corporate issuers. For these issuers the default probabilities predicted by the Markov chain and mover–stayer models differ; in particular, for bonds 1–4 years old the mover–stayer model estimates substantially lower default probabilities from rating C than the Markov chain. Copyright © 2004 John Wiley & Sons, Ltd.

3.
Markov chain theory is proving to be a powerful approach to bootstrapping finite-state processes, especially where time dependence is nonlinear. In this work we extend this approach to bootstrap discrete-time continuous-valued processes. To this purpose we solve a minimization problem to partition the state space of a continuous-valued process into a finite number of intervals or unions of intervals (its states) and to identify the time lags which provide "memory" to the process. A distance is used as the objective function to encourage the clustering of states having similar transition probabilities. The problem of the exploding number of alternative partitions in the solution space (which grows with the number of states and the order of the Markov chain) is addressed through a Tabu Search algorithm. The method is applied to bootstrap the series of German and Spanish electricity prices. The analysis of the results confirms the good consistency properties of the proposed method.
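A minimal sketch of the basic idea, using simple quantile bins in place of the paper's Tabu-Search-optimized partition and a first-order chain; all parameters and the toy series are illustrative, not the electricity-price application:

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_bootstrap(series, n_bins=4, length=None):
    """Bootstrap a continuous-valued series via a first-order Markov chain
    on a quantile-based discretization of its state space."""
    length = length or len(series)
    # partition the state space into quantile bins (a stand-in for an
    # optimized partition of intervals)
    edges = np.quantile(series, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(series, edges)
    # estimate the transition matrix with add-one smoothing
    P = np.ones((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)
    # resample: walk the chain, then draw an observed value from each bin
    out, s = [], states[0]
    for _ in range(length):
        s = rng.choice(n_bins, p=P[s])
        out.append(rng.choice(series[states == s]))
    return np.array(out)

series = np.sin(np.arange(200) * 0.3) + rng.normal(0, 0.1, 200)
boot = markov_bootstrap(series)
```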

4.
This paper proposes an extension of Merton's jump-diffusion model to reflect the time inhomogeneity caused by changes of market states. The benefit is that it simultaneously captures two salient features of asset returns: heavy tails and volatility clustering. On the basis of an empirical analysis where jumps are found to happen much more frequently in risky periods than in normal periods, we assume that the Poisson process driving jumps is governed by a two-state on-off Markov chain. This makes jumps occur intermittently and helps to generate different dynamics under the two states. We provide a full analysis of the proposed model and derive recursive formulas for the conditional state probabilities of the underlying Markov chain. These analytical results lead to an algorithm that can be implemented to determine the prices of European options under normal and risky states. Numerical examples are given to demonstrate how time inhomogeneity influences return distributions, option prices, and volatility smiles. The contrasting patterns seen in different states indicate the insufficiency of time-homogeneous models and justify the use of the proposed model. Copyright © 2012 John Wiley & Sons, Ltd.
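The on-off jump mechanism can be sketched as follows, with a two-state chain modulating the Poisson jump intensity inside an otherwise standard jump diffusion; the intensities and all other parameters are hypothetical, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical parameters: jumps are rare in the "normal" state (0)
# and frequent in the "risky" state (1)
lam = np.array([0.5, 5.0])          # jump intensities per unit time
Q = np.array([[0.95, 0.05],         # one-step state transition matrix
              [0.20, 0.80]])
mu, sigma, jump_sd = 0.05, 0.2, 0.1
dt, n = 1 / 252, 252

def simulate_path(s0=100.0):
    """One year of daily prices under a Markov-modulated jump diffusion."""
    state, logs = 0, [np.log(s0)]
    for _ in range(n):
        state = rng.choice(2, p=Q[state])            # regime switch
        n_jumps = rng.poisson(lam[state] * dt)       # state-dependent jumps
        jump = rng.normal(0, jump_sd, n_jumps).sum()
        diff = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
        logs.append(logs[-1] + diff + jump)
    return np.exp(logs)

path = simulate_path()
```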

5.
Members of a population of fixed size N can be in any one of n states. In discrete time the individuals jump from one state to another, independently of each other, with probabilities described by a homogeneous Markov chain. At each time a sample of size M is withdrawn (with replacement). Based on these observations, and using the techniques of hidden Markov models, recursive estimates for the distribution of the population are obtained.

6.
A Markov chain is a natural probability model for accounts receivable. For example, accounts that are 'current' this month have a probability of moving next month into 'current', 'delinquent' or 'paid-off' states. If the transition matrix of the Markov chain were known, forecasts could be formed for future months for each state. This paper applies a Markov chain model to subprime loans that appear neither homogeneous nor stationary. Innovative estimation methods for the transition matrix are proposed. Bayes and empirical Bayes estimators are derived where the population is divided into segments or subpopulations whose transition matrices differ in some, but not all entries. Loan-level models for key transition matrix entries can be constructed where loan-level covariates capture the non-stationarity of the transition matrix. Prediction is illustrated on a $7 billion portfolio of subprime fixed first mortgages and the forecasts show good agreement with actual balances in the delinquency states. Copyright © 2010 John Wiley & Sons, Ltd.
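If the transition matrix were known, the forecasting step the abstract describes reduces to repeated matrix multiplication; the matrix and balances below are made up for illustration:

```python
import numpy as np

# states: current, delinquent, paid-off (absorbing), default (absorbing)
# entries are hypothetical monthly transition probabilities
P = np.array([[0.90, 0.06, 0.04, 0.00],
              [0.30, 0.55, 0.05, 0.10],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])

balance = np.array([1000.0, 100.0, 0.0, 0.0])  # $ in each state today

def forecast(balance, P, months):
    """Project the portfolio's state distribution forward."""
    return balance @ np.linalg.matrix_power(P, months)

one_year = forecast(balance, P, 12)
```

Because each row of `P` sums to one, total balance is conserved across the forecast horizon.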

7.
A model is developed for pricing volatility derivatives, such as variance swaps and volatility swaps under a continuous-time Markov-modulated version of the stochastic volatility (SV) model developed by Heston. In particular, it is supposed that the parameters of this version of Heston's SV model depend on the states of a continuous-time observable Markov chain process, which can be interpreted as the states of an observable macroeconomic factor. The market considered is incomplete in general, and hence, there is more than one equivalent martingale pricing measure. The regime switching Esscher transform used by Elliott et al. is adopted to determine a martingale pricing measure for the valuation of variance and volatility swaps in this incomplete market. Both probabilistic and partial differential equation (PDE) approaches are considered for the valuation of volatility derivatives.

8.
We consider portfolio optimization in a regime-switching market. The assets of the portfolio are modeled through a hidden Markov model (HMM) in discrete time, where drift and volatility of the single assets are allowed to switch between different states. We consider different parametrizations of the involved asset covariances: statewise uncorrelated assets (though linked through the common Markov chain), assets correlated in a state-independent way, and assets where the correlation varies from state to state. As a benchmark, we also consider a model without regime switches. We utilize a filter-based expectation-maximization (EM) algorithm to obtain optimal parameter estimates within this multivariate HMM and present parameter estimators in all three HMM settings. We discuss the impact of these different models on the performance of several portfolio strategies. Our findings show that for simulated returns, our strategies in many settings outperform naïve investment strategies, like the equal weights strategy. Information criteria can be used to detect the best model for estimation as well as for portfolio optimization. A second study using real data confirms these findings.

9.
Cure models represent an appealing tool when analyzing default time data where two groups of companies are supposed to coexist: those which could eventually experience a default (uncured) and those which will never develop the endpoint (cured). One of their most interesting properties is the possibility of distinguishing covariates that influence the probability of belonging to the population's uncured fraction from those affecting the default time distribution. This feature allows a separate analysis of the two dimensions of default risk: whether the default can occur, and when it will occur given that it can. Basing our analysis on a large sample of Italian firms, the probability of being uncured is estimated with a binary logit regression, whereas a discrete-time version of Cox's proportional hazards approach is used to model the time distribution of defaults. The extension of the cure model to a forecasting framework is then accomplished by replacing the discrete-time baseline function with an appropriate time-varying system-level covariate able to capture the underlying macroeconomic cycle. We propose a holdout sample procedure to test the classification power of the cure model. When compared with a single-period logit regression and a standard duration analysis approach, the cure model proves more reliable in terms of overall predictive performance. Copyright © 2013 John Wiley & Sons, Ltd.

10.
The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed for this purpose.

11.
A Markov chain plays an important role in an interacting multiple model (IMM) algorithm which has been shown to be effective for target tracking systems. Such systems are described by a mixing of continuous states and discrete modes. The switching between system modes is governed by a Markov chain. In real world applications, this Markov chain may change or needs to be changed. Therefore, one may be concerned about a target tracking algorithm with the switching of a Markov chain. This paper concentrates on fault-tolerant algorithm design and algorithm analysis of IMM estimation with the switching of a Markov chain. Monte Carlo simulations are carried out and several conclusions are given.

12.
We give a new method for generating perfectly random samples from the stationary distribution of a Markov chain. The method is related to coupling from the past (CFTP), but only runs the Markov chain forwards in time, and never restarts it at previous times in the past. The method is also related to an idea known as PASTA (Poisson arrivals see time averages) in the operations research literature. Because the new algorithm can be run using a read-once stream of randomness, we call it read-once CFTP. The memory and time requirements of read-once CFTP are on par with the requirements of the usual form of CFTP, and for a variety of applications the requirements may be noticeably less. Some perfect sampling algorithms for point processes are based on an extension of CFTP known as coupling into and from the past; for completeness, we give a read-once version of coupling into and from the past, but it remains impractical. For these point process applications, we give an alternative coupling method with which read-once CFTP may be efficiently used. © 2000 John Wiley & Sons, Inc. Random Struct. Alg., 16: 85–113, 2000.
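For contrast with the read-once variant, here is a minimal sketch of the usual form of CFTP for a small monotone chain (a symmetric walk with reflecting boundaries; the chain and parameters are illustrative, not from the paper): coupled top and bottom paths are run from increasingly distant past times, reusing the same randomness, until they coalesce at time 0.

```python
import random

N = 4          # states 0..N
p_up = 0.5

def update(s, u):
    """Monotone update rule: u < p_up moves up, otherwise down."""
    return min(s + 1, N) if u < p_up else max(s - 1, 0)

def cftp(seed=0):
    """Standard CFTP for a monotone chain."""
    rng = random.Random(seed)
    randoms = []                        # randoms[k] is u_{-(k+1)}
    T = 1
    while True:
        while len(randoms) < T:         # extend further into the past,
            randoms.append(rng.random())  # reusing earlier randomness
        top, bottom = N, 0
        for t in range(T - 1, -1, -1):  # apply u_{-T}, ..., u_{-1} in order
            top = update(top, randoms[t])
            bottom = update(bottom, randoms[t])
        if top == bottom:
            return top                  # exact draw from the stationary law
        T *= 2

sample = cftp()
```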

13.
Stochastic Analysis and Applications, 2013, 31(4): 935–951
Abstract

In this paper, we investigate the stochastic stabilization problem for a class of linear discrete time-delay systems with Markovian jump parameters. The jump parameters considered here are modeled by a discrete-time Markov chain. Our attention is focused on the design of a linear state feedback memoryless controller such that stochastic stability of the resulting closed-loop system is guaranteed whether or not the system under consideration has parameter uncertainties. Sufficient conditions, in terms of a set of solutions of coupled matrix inequalities, are proposed to solve these problems.

14.
We prove necessary and sufficient conditions for the transience of the non-zero states in a non-homogeneous, continuous time Markov branching process. The result is obtained by passing from results about the discrete time skeleton of the continuous time chain to the continuous time chain itself. An alternative proof of a result for continuous time Markov branching processes in random environments is then given, showing that earlier moment conditions were not necessary.

15.
An initial test of the discrete-time Markov model in the study of educational aspirations throughout high school was carried out. The design of the study permitted testing for sex differences and order effects. The results indicate a good fit between the data and the model across several cohorts of students. Order effects were apparent, but sex differences in the transition probabilities were not found. Future change in aspirations appears least likely for students with a history of stable college plans, and most likely for those who start with non-college aspirations and change to college plans.

16.
In a hidden Markov model, the underlying Markov chain is usually unobserved. Often, the state path with maximum posterior probability (Viterbi path) is used as its estimate. Although having the biggest posterior probability, the Viterbi path can behave very atypically by passing states of low marginal posterior probability. To avoid such situations, the Viterbi path can be modified to bypass such states. In this article, an iterative procedure for improving the Viterbi path in such a way is proposed and studied. The iterative approach is compared with a simple batch approach where a number of states with low probability are all replaced at the same time. It can be seen that the iterative way of adjusting the Viterbi state path is more efficient and it has several advantages over the batch approach. The same iterative algorithm for improving the Viterbi path can be used when it is possible to reveal some hidden states and estimating the unobserved state sequence can be considered as an active learning task. The batch approach as well as the iterative approach are based on classification probabilities of the Viterbi path. Classification probabilities play an important role in determining a suitable value for the threshold parameter used in both algorithms. Therefore, properties of classification probabilities under different conditions on the model parameters are studied.
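For reference, a compact log-space implementation of the Viterbi path itself (the object being adjusted in the paper above), run on a toy two-state model with hypothetical parameters:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden state path, computed in log space.
    pi: initial distribution, A: transitions, B: emission probabilities."""
    T, K = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)      # best predecessor of each state j
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy two-state example (hypothetical parameters)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
best = viterbi([0, 0, 1, 1, 1], pi, A, B)
```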

17.
Abstract

We postulate observations from a Poisson process whose rate parameter modulates between two values determined by an unobserved Markov chain. The theory switches from continuous to discrete time by considering the intervals between observations as a sequence of dependent random variables. A result from hidden Markov models allows us to sample from the posterior distribution of the model parameters given the observed event times using a Gibbs sampler with only two steps per iteration.

18.
We investigate an autoregressive diffusion approximation method applied to the Wright-Fisher model in population genetics by considering a Markov chain with Bernoulli distributed independent variables. The use of an autoregressive diffusion method and an averaged allelic frequency process leads to an Ornstein–Uhlenbeck diffusion process with discrete time. The normalized averaged frequency process possesses independent allele frequency indicators with constant conditional variance at equilibrium. In a monoecious diploid population of size N with r generations, we consider the time to equilibrium of averaged allele frequency in a single-locus two-allele pure sampling model.

19.
A fluctuation theory for Markov chains on an ordered countable state space is developed, using ladder processes. These are shown to be Markov renewal processes. Results are given for the joint distribution of the extremum (maximum or minimum) and the first time the extremum is achieved. Also a new classification of the states of a Markov chain is suggested. Two examples are given.

20.
We consider discrete-time single-server queues fed by independent, heterogeneous sources with geometrically distributed idle periods. While active, each source generates cells depending on the state of the underlying Markov chain. We first derive a general and explicit formula for the mean buffer contents in steady state when the underlying Markov chain of each source has finitely many states. Next we show the applicability of the general formula to queues fed by independent sources with infinite-state underlying Markov chains and discrete phase-type active periods. We then provide explicit formulas for the mean buffer contents in queues with Markovian autoregressive sources and greedy sources. Further we study two limiting cases in general settings: one in which the lengths of active periods of each source are governed by an infinite-state absorbing Markov chain, and one obtained in the limit as the number of sources goes to infinity under an appropriate normalizing condition. The latter limit leads to a queue with (generalized) M/G/∞ input sources. We provide sufficient conditions under which the general formula is applicable to these limiting cases. AMS subject classification: 60K25, 60K37, 60J10. This revised version was published online in June 2005 with a corrected cover date.
