Similar documents
20 matching records found
1.
In this paper, we use the Markov chain censoring technique to study infinite state Markov chains whose transition matrices possess block-repeating entries. We demonstrate that a number of important probabilistic measures are invariant under censoring. Informally speaking, these measures involve first passage times or expected numbers of visits to certain levels while other levels are taboo; they are closely related to the so-called fundamental matrix of the Markov chain, which is also studied here. Factorization theorems for the characteristic equation of the blocks of the transition matrix are obtained. Necessary and sufficient conditions are derived for such a Markov chain to be positive recurrent, null recurrent, or transient, based either on spectral analysis or on a property of the fundamental matrix. Explicit expressions are obtained for key probabilistic measures, including the stationary probability vector and the fundamental matrix, which could potentially be used to develop various recursive algorithms for computing these measures.
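The censoring construction in the abstract above admits a compact numerical illustration for a small finite chain. Assuming censoring on a subset A of states, the censored transition matrix is P_AA + P_AB (I − P_BB)^{-1} P_BA; the example chain and the function names below are illustrative, not from the paper — a minimal pure-Python sketch:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def censored_matrix(P, A):
    """Censor the chain on the state subset A: watch the chain only while
    it is in A.  Returns P_AA + P_AB (I - P_BB)^{-1} P_BA."""
    B = [i for i in range(len(P)) if i not in A]
    # X columns: (I - P_BB)^{-1} P_BA, one linear solve per target state in A
    X = []
    for a in A:
        S = [[(1.0 if i == j else 0.0) - P[i][j] for j in B] for i in B]
        X.append(solve(S, [P[i][a] for i in B]))
    return [[P[i][a2] + sum(P[i][B[k]] * X[col][k] for k in range(len(B)))
             for col, a2 in enumerate(A)] for i in A]
```

For a chain with stationary vector π, the censored chain's stationary vector is π restricted to A and renormalized — one instance of the censoring-invariant measures the abstract refers to.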

2.
Classifying the states of a finite Markov chain requires the identification of all irreducible closed sets and the set of transient states. This paper presents an algorithm for identifying these states that executes in time O(max(|V|, |E|)), where |V| is the number of states and |E| is the number of positive entries in the Markov matrix. The algorithm finds the closed strongly connected components of the transition graph using a depth-first search.
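The classification above can be sketched in a few lines: find the strongly connected components of the transition graph (Tarjan's algorithm below stands in for the paper's depth-first search), keep the components with no outgoing edges as the irreducible closed sets, and call the remaining states transient. A hedged sketch, not the paper's exact algorithm:

```python
def classify_states(P):
    """Given transition matrix P (list of rows), return the irreducible
    closed sets and the transient states of the finite Markov chain."""
    n = len(P)
    adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]
    index = [None] * n
    low = [0] * n
    on_stack = [False] * n
    stack, sccs = [], []
    counter = [0]

    def strongconnect(v):          # Tarjan's SCC algorithm
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack[v] = True
        for w in adj[v]:
            if index[w] is None:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif on_stack[w]:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:     # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in range(n):
        if index[v] is None:
            strongconnect(v)

    # a component is closed iff no edge leaves it
    closed = [sorted(c) for c in sccs
              if all(w in c for v in c for w in adj[v])]
    transient = sorted(set(range(n)) - {v for c in closed for v in c})
    return closed, transient
```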

3.
A discrete time Markov chain assumes that the population is homogeneous: each individual in the population evolves according to the same transition matrix. In contrast, a discrete mover-stayer (MS) model postulates a simple form of population heterogeneity; in each initial state, there is a proportion of individuals who never leave this state (stayers) and a complementary proportion who evolve according to a Markov chain (movers). The MS model was previously extended by specifying the stayer probability as a logistic function of an individual's covariates but keeping the same transition matrix for all movers. We further extend the MS model by allowing each mover to have his or her own covariate-dependent transition matrix. The model for a mover's transition matrix is related to the existing Markov chain mixture model with mixing on the speed of movement of the Markov chains. The proposed model is estimated using the expectation-maximization algorithm and illustrated with a large data set on car loans and a simulation study.

4.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing, in the latter it is regular.
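The duration-specific construction can be made concrete for case (b), a non-absorbing maximum duration. The retention probabilities below (increasing in duration, capped below 1, so the chain is regular) are invented for illustration; the state space is the set of (location, duration) pairs:

```python
def duration_chain(stay):
    """Finite 'cumulative inertia' chain over states (location, duration)
    with two locations and durations 1..D, D = len(stay).  stay[d-1] is
    the probability of remaining in the current location at duration d;
    staying advances the duration (capped at D), leaving resets it to 1."""
    D = len(stay)
    states = [(loc, d) for loc in (0, 1) for d in range(1, D + 1)]
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    P = [[0.0] * n for _ in range(n)]
    for loc, d in states:
        i = idx[(loc, d)]
        P[i][idx[(loc, min(d + 1, D))]] += stay[d - 1]   # stay: inertia grows
        P[i][idx[(1 - loc, 1)]] += 1.0 - stay[d - 1]     # move: duration resets
    return states, P

def stationary(P, iters=500):
    """Equilibrium distribution by power iteration (valid because the
    chain is regular when stay[D-1] < 1)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

With stay = [0.5, 0.7, 0.9] the two locations are symmetric, so the equilibrium weights depend only on duration.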

5.
We prove that if a certain row of the transition probability matrix of a regular Markov chain is subtracted from the other rows of this matrix and then this row and the corresponding column are deleted, then the spectral radius of the resulting matrix is less than 1. We use this property of a regular Markov chain to construct an iterative process for solving the Howard system of equations, which arises in the investigation of controlled Markov chains with a single ergodic class and, possibly, transient states.
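The spectral-radius property is easy to check numerically: subtract a row, delete it and its column, and watch the iterates of the reduced matrix contract to zero — which is exactly what makes an iterative process for the Howard equations converge. The 3×3 matrix is an arbitrary regular example, not from the paper:

```python
def reduced_matrix(P, k):
    """Subtract row k of P from every other row, then delete row k and
    column k.  For a regular chain the paper shows rho(result) < 1."""
    idx = [i for i in range(len(P)) if i != k]
    return [[P[i][j] - P[k][j] for j in idx] for i in idx]

def iterate(A, x, steps):
    """Apply x -> A x repeatedly; contraction to zero evidences rho(A) < 1."""
    n = len(A)
    for _ in range(steps):
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

P = [[0.2, 0.3, 0.5], [0.4, 0.4, 0.2], [0.1, 0.6, 0.3]]  # regular chain
A = reduced_matrix(P, 0)
x = iterate(A, [1.0, 1.0], 60)   # shrinks toward the zero vector
```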

6.
7.
In previous work, the embedding problem is examined within the entire set of discrete-time Markov chains. However, for several phenomena, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove necessary conditions for the transition matrix of a discrete-time Markov chain with ordered states to be embeddable in a state-wise monotone Markov chain over time intervals of length 0.5: a transition matrix with a square root within the set of state-wise monotone matrices has trace at least equal to 1.
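The necessary condition in the last sentence is directly checkable. A small sketch (the function names are ours): `is_statewise_monotone` verifies that the rows of P are non-decreasing in the usual stochastic order, and `may_have_monotone_root` tests the trace bound that any matrix with a state-wise monotone square root must satisfy:

```python
def is_statewise_monotone(P):
    """State-wise (stochastically) monotone: for every k, the tail sums
    sum_{j >= k} P[i][j] are non-decreasing in the row index i."""
    n = len(P)
    for k in range(n):
        tails = [sum(row[k:]) for row in P]
        if any(tails[i] > tails[i + 1] + 1e-12 for i in range(n - 1)):
            return False
    return True

def may_have_monotone_root(P):
    """Necessary condition from the paper: a transition matrix with a
    square root among the state-wise monotone matrices has trace >= 1."""
    return sum(P[i][i] for i in range(len(P))) >= 1.0
```

Failing the trace test rules out embeddability over half-length intervals; passing it proves nothing, since the condition is only necessary.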

8.
We consider an accessibility index for the states of a discrete-time, ergodic, homogeneous Markov chain on a finite state space; this index is naturally associated with the random walk centrality introduced by Noh and Rieger (2004) for a random walk on a connected graph. We observe that the vector of accessibility indices provides a partition of Kemeny’s constant for the Markov chain. We provide three characterizations of this accessibility index: one in terms of the first return time to the state in question, and two in terms of the transition matrix associated with the Markov chain. Several bounds are provided on the accessibility index in terms of the eigenvalues of the transition matrix and the stationary vector, and the bounds are shown to be tight. The behaviour of the accessibility index under perturbation of the transition matrix is investigated, and examples exhibiting some counter-intuitive behaviour are presented. Finally, we characterize the situation in which the accessibility indices for all states coincide.
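Kemeny's constant — the quantity the accessibility indices partition — can be computed from mean first passage times: K = Σ_j π_j m(i, j), with m(j, j) = 0, takes the same value for every starting state i. A self-contained sketch (small dense solver, power iteration for the stationary vector; the example chains are ours):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def stationary(P, iters=500):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def kemeny_from(P, start):
    """K = sum_j pi_j * m(start, j) with m(j, j) = 0: Kemeny's constant,
    the same for every starting state.  Each column of mean first passage
    times m(., j) solves (I - P restricted to states != j) m = 1."""
    n = len(P)
    pi = stationary(P)
    K = 0.0
    for j in range(n):
        others = [i for i in range(n) if i != j]
        A = [[(1.0 if a == c else 0.0) - P[a][c] for c in others] for a in others]
        m = solve(A, [1.0] * (n - 1))
        mfpt = {i: m[r] for r, i in enumerate(others)}
        mfpt[j] = 0.0
        K += pi[j] * mfpt[start]
    return K
```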

9.
1. Introduction. The motivation for writing this paper came from calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature, for example, [3–7]. For a finite Markov chain, …

10.
We estimate the probability of delinquency and default for a sample of credit card loans using intensity models, via semi-parametric multiplicative hazard models with time-varying covariates. It is the first time these models, previously applied to the estimation of rating transitions, are used on retail loans. Four states are defined in this non-homogeneous Markov chain: up-to-date, one month in arrears, two months in arrears, and default; transitions between states are affected by individual characteristics of the debtor at application and their repayment behaviour since. These intensity estimations allow for insights into the factors that affect movements towards (and recovery from) delinquency, and into default (or not). Results indicate that different types of debtors behave differently while in different states. The probabilities estimated for each type of transition are then used to make out-of-sample predictions over a specified period of time.

11.
We study a generalization of Holte's amazing matrix, the transition probability matrix of the Markov chain of ‘carries’ in a non-standard numeration system. The stationary distributions are explicitly described by numbers that can be regarded as a generalization of the Eulerian numbers and the MacMahon numbers. We also show that similar properties hold even for numeration systems with negative bases.
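For the standard base-b numeration system, the amazing matrix can be built by direct enumeration, and its stationary distribution is the Eulerian numbers divided by n!, independently of the base — the property the paper generalizes. A sketch for the classical case only (the non-standard and negative-base systems of the paper are not covered):

```python
from itertools import product

def carries_matrix(n, b):
    """Holte's amazing matrix for adding n uniform base-b digits: from
    carry-in i, the next carry is (i + digit sum) // b, a value in 0..n-1."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for digits in product(range(b), repeat=n):
            P[i][(i + sum(digits)) // b] += 1.0 / b ** n
    return P

def stationary(P, iters=200):
    """Stationary distribution by power iteration (the chain is regular)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```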

12.
A continuous-time binary-matrix-valued Markov chain is used to model the process by which social structure affects individual behavior. The model is developed in the context of sociometric networks of interpersonal affect. By viewing the network as a time-dependent stochastic process it is possible to construct transition intensity equations for the probability that choices between group members will change. These equations can contain parameters for structural effects. Empirical estimates of the parameters can be interpreted as measures of structural tendencies. Some elementary processes are described and the application of the model to cross-sectional data is explained in terms of the steady state solution to the process.

13.
Type II topoisomerases are enzymes that change the topology of DNA by performing strand passage. In particular, they unknot knotted DNA very efficiently. Motivated by this experimental observation, we investigate transition probabilities between knots. We use the BFACF algorithm to generate ensembles of polygons in Z^3 of fixed knot type. We introduce a novel strand-passage algorithm which generates a Markov chain in knot space. The entries of the corresponding transition probability matrix determine state transitions in knot space and can track the evolution of different knots after repeated strand-passage events. We outline future applications of this work to DNA unknotting.

14.
Within the set of discrete-time Markov chains, a Markov chain is embeddable in case its transition matrix has at least one root that is a stochastic matrix. The present paper examines the embedding problem for discrete-time Markov chains with three states and with real eigenvalues. Sufficient embedding conditions are proved for diagonalizable transition matrices as well as for non-diagonalizable transition matrices and for all possible configurations regarding the sign of the eigenvalues. The embedding conditions are formulated in terms of the projections and the spectral decomposition of the transition matrix.
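In the 2-state case the construction is fully explicit and illustrates what "embeddable" means: a 2×2 stochastic matrix with a non-negative second eigenvalue has a stochastic square root sharing its eigenvectors. This warm-up is ours — the paper treats the genuinely harder 3-state case:

```python
import math

def stochastic_sqrt_2x2(a, b):
    """Stochastic square root of P = [[1-a, a], [b, 1-b]], assuming the
    second eigenvalue 1 - a - b is >= 0.  The root keeps the eigenvectors
    of P: its off-diagonal entries have the same ratio a/b, and its
    second eigenvalue is sqrt(1 - a - b)."""
    mu = math.sqrt(1.0 - a - b)
    ap = a * (1.0 - mu) / (a + b)   # off-diagonal entries of the root
    bp = b * (1.0 - mu) / (a + b)
    return [[1.0 - ap, ap], [bp, 1.0 - bp]]
```

Squaring the returned matrix recovers P exactly, so such a P is embeddable over half-length time steps in the sense of the abstract.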

15.
The finite Markov chain imbedding technique has been successfully applied in various fields for finding the exact or approximate distributions of runs and patterns under independent and identically distributed or Markov-dependent trials. In this paper, we derive a new recursive equation for the distribution of the scan statistic using the finite Markov chain imbedding technique. We also address the problem of obtaining the transition probabilities of the imbedded Markov chain by introducing a notion termed Double Finite Markov Chain Imbedding, in which the transition probabilities are themselves obtained by using the finite Markov chain imbedding technique again. Applications to a random permutation model in chemistry and the coupon collector’s problem are given to illustrate our idea.
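The flavour of the imbedding technique is captured by its simplest instance, the distribution of success runs (the scan statistic in the paper is a generalization). The imbedded chain tracks the current run length, with a run of length k made absorbing; pushing the initial distribution through n steps gives the exceedance probability. An illustrative sketch, not the paper's recursion:

```python
def prob_run_at_least(n, k, p):
    """P(some run of k consecutive successes occurs in n Bernoulli(p)
    trials), by finite Markov chain imbedding: states 0..k are the
    current run length, with state k absorbing."""
    dist = [0.0] * (k + 1)
    dist[0] = 1.0
    for _ in range(n):
        new = [0.0] * (k + 1)
        new[k] = dist[k]                     # absorbed mass stays absorbed
        for s in range(k):
            new[s + 1] += dist[s] * p        # success extends the run
            new[0] += dist[s] * (1.0 - p)    # failure resets it
        dist = new
    return dist[k]
```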

16.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem.
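Detailed balance (π_i p_ij = π_j p_ji) is the property the closest-reversible-matrix computation restores. The sketch below checks it and builds one standard reversible approximation, the additive reversibilization (P + P*)/2 — note that this is generally not the closest reversible chain in the paper's sense, which requires solving the convex program:

```python
def is_reversible(P, pi, tol=1e-9):
    """Detailed balance check: pi_i P_ij == pi_j P_ji for all i, j."""
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) <= tol
               for i in range(n) for j in range(n))

def additive_reversibilization(P, pi):
    """(P + P*)/2 with P*_ij = pi_j P_ji / pi_i: reversible w.r.t. pi and
    stochastic, but in general not the *closest* reversible chain."""
    n = len(P)
    return [[0.5 * (P[i][j] + pi[j] * P[j][i] / pi[i]) for j in range(n)]
            for i in range(n)]
```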

17.
The main aim of this paper is to examine the applicability of generalized inverses to a wide variety of problems in applied probability where a Markov chain is present either directly or indirectly through some form of imbedding. By characterizing all generalized inverses of I − P, where P is the transition matrix of a finite irreducible discrete-time Markov chain, we are able to obtain general procedures for finding stationary distributions, moments of the first passage time distributions, and asymptotic forms for the moments of the occupation-time random variables. It is shown that all known explicit methods for examining these problems can be expressed in this generalized inverse framework. More generally, in the setting of a Markov renewal process the aforementioned problems are also examined using generalized inverses of I − P. As a special case, Markov chains in continuous time are considered, and we show that the generalized inverse technique can be applied directly to the infinitesimal generator of the process, instead of to I − P, where P is the transition matrix of the discrete-time jump Markov chain.
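One concrete payoff of working with I − P: for an irreducible chain, I − P + ee^T is nonsingular, and π^T = e^T (I − P + ee^T)^{-1} recovers the stationary distribution, so a single linear solve replaces an eigenproblem. A sketch under that standard construction (the solver and example chain are ours):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def stationary_via_modified_inverse(P):
    """pi^T (I - P + ee^T) = e^T, since pi^T (I - P) = 0 and pi^T e = 1;
    so pi solves the transposed linear system M^T pi = e."""
    n = len(P)
    M = [[(1.0 if i == j else 0.0) - P[i][j] + 1.0 for j in range(n)]
         for i in range(n)]
    MT = [[M[j][i] for j in range(n)] for i in range(n)]
    return solve(MT, [1.0] * n)
```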

18.
In the current paper, based on progressive type-II hybrid censored samples, the maximum likelihood and Bayes estimates for the two-parameter Burr XII distribution are obtained. We propose the use of the expectation-maximization (EM) algorithm to compute the maximum likelihood estimates (MLEs) of the model parameters. Further, we derive the asymptotic variance-covariance matrix of the MLEs by applying the missing information principle; it can be utilized to construct asymptotic confidence intervals (CIs) for the parameters. The Bayes estimates of the unknown parameters are obtained under the assumption of gamma priors by using Lindley’s approximation and the Markov chain Monte Carlo (MCMC) technique. Also, MCMC samples are used to construct the highest posterior density (HPD) credible intervals. A simulation study is conducted to investigate the accuracy of the estimates and to compare the performance of the CIs obtained. Finally, a real data set is analyzed for illustrative purposes.

19.
A discrete-time mover-stayer (MS) model is an extension of a discrete-time Markov chain which assumes a simple form of population heterogeneity: the individuals in the population are either stayers, who never leave their initial states, or movers, who move according to a Markov chain. We, in turn, propose an extension of the MS model by specifying the stayer probability as a logistic function of an individual's covariates. Such an extension has recently been discussed for a continuous-time MS model but has not been considered before for a discrete-time one. This extension allows for an in-sample classification of subjects who never left their initial states into stayers or movers. The parameters of the extended MS model are estimated using the expectation-maximization algorithm. A novel bootstrap procedure is proposed for out-of-sample validation of the in-sample classification. The bootstrap procedure is also applied to validate the in-sample classification with respect to a more general dichotomy than the MS one. The developed methods are illustrated with a data set on installment loans, but they can be applied more broadly in the credit risk area, where prediction of the creditworthiness of a loan borrower or lessee is of major interest.

20.
Because of demand uncertainty, stockouts occur easily, yet computing the stockout probability is a difficult problem. This paper models a supply chain consisting of a single manufacturer and multiple retailers as a Markov process and uses queueing theory to analyze the transitions between the states of the supply chain. Taking a single manufacturer and three retailers as an example, it clarifies the internal structure of the state transition matrix, and for the general model it derives formulas for computing the stockout and full-stock probabilities. On this basis, the influence of the state probabilities on the total inventory cost is analyzed, providing a basis for optimal decision-making for the system. Numerical experiments show that the proposed method is effective.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)