Similar Articles
20 similar articles found.
1.
Discrete time Markov chains with interval probabilities
The parameters of Markov chain models are often not known precisely. Instead of ignoring this problem, a better way to cope with it is to incorporate the imprecision into the models. This has become possible with the development of models of imprecise probabilities, such as the interval probability model. In this paper we discuss some modelling approaches which range from simple probability intervals to the general interval probability models and further to models allowing completely general convex sets of probabilities. The basic idea is that precisely known initial distributions and transition matrices are replaced by imprecise ones, which effectively means that sets of possible candidates are considered. Consequently, sets of possible results are obtained and represented using similar imprecise probability models. We first set up the model and then show how to perform calculations of the distributions corresponding to the consecutive steps of a Markov chain. We present several approaches to such calculations and compare them with respect to the accuracy of the results. Next we consider a generalisation of the concept of regularity and study the convergence of regular imprecise Markov chains. We also give some numerical examples to compare different approaches to calculations of the sets of probabilities.
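By way of illustration (a minimal sketch, not the paper's own algorithms), the snippet below propagates a lower expectation through one step of an imprecise Markov chain whose transition rows are given by probability intervals; the greedy row-wise optimization is the standard device for reachable probability intervals, and all numbers are illustrative.

```python
import numpy as np

def row_lower_expectation(f, lo, hi):
    """Lower expectation of f under one row of interval transition
    probabilities [lo, hi] (assumed reachable: lo.sum() <= 1 <= hi.sum()).
    Greedy: start from the lower bounds, then pour the remaining mass
    onto states in order of increasing f."""
    p = lo.copy()
    rest = 1.0 - lo.sum()
    for j in np.argsort(f):                  # cheapest states first
        add = min(hi[j] - lo[j], rest)
        p[j] += add
        rest -= add
    return float(f @ p)

def step_lower(f, LO, HI):
    """One step of the imprecise chain: vector of lower expectations of f
    at the next time, conditional on each current state."""
    return np.array([row_lower_expectation(f, LO[i], HI[i])
                     for i in range(LO.shape[0])])

# toy 2-state chain with interval-valued transition rows (illustrative)
LO = np.array([[0.6, 0.2], [0.1, 0.7]])
HI = np.array([[0.8, 0.4], [0.3, 0.9]])
f = np.array([1.0, 0.0])                     # indicator of state 0
print(step_lower(f, LO, HI))                 # -> [0.6, 0.1]
```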

2.
There has recently been renewed interest in Markov chains, owing to their attractive properties for analyzing real-life data arising as time series or longitudinal data in various fields. Models have been proposed for fitting first- or higher-order Markov chains. However, there is a serious lack of realistic methods for linking covariate dependence with transition probabilities in order to analyze the factors associated with such transitions, especially for higher-order Markov chains. L.R. Muenz and L.V. Rubinstein [Markov models for covariate dependence of binary sequences, Biometrics 41 (1985) 91–101] employed logistic regression models to analyze the transition probabilities of a first-order Markov model. That methodology, however, does not generalize readily to higher-order Markov chains. This study aims to provide a comprehensive covariate-dependent Markov model of higher order, generalizing the estimation procedure to Markov models of any order. The proposed models and inference procedures are simple, and the covariate dependence of the transition probabilities of any order can be examined without making the underlying model complex. An example based on rainfall data illustrates the utility of the proposed model for analyzing complex real-life problems. The application indicates that higher-order covariate-dependent Markov models can be employed conveniently, and that the results can give both researchers and policymakers in-depth insight into the factors underlying different types of transitions, reverse transitions and repeated transitions. The estimation and test procedures can be applied to a Markov model of any order without making the theory or interpretation difficult for common users.
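To make the covariate-dependence idea concrete, here is a minimal sketch in the spirit of the first-order Muenz–Rubinstein model (not the paper's higher-order procedure): the transition probabilities of a simulated binary sequence are linked to a covariate via logistic regression. A higher-order variant would simply add further lagged states as predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# simulated binary sequence (e.g. rain / no rain) with one covariate x
n = 500
x = rng.normal(size=n)
y = np.zeros(n, dtype=int)
for t in range(1, n):
    logit = -0.5 + 1.2 * y[t - 1] + 0.8 * x[t]   # true transition model
    y[t] = rng.random() < 1 / (1 + np.exp(-logit))

# first-order covariate-dependent transition model:
# P(y_t = 1 | y_{t-1}, x_t) fitted by logistic regression
X = np.column_stack([y[:-1], x[1:]])
model = LogisticRegression().fit(X, y[1:])
print(model.intercept_, model.coef_)             # estimates near (-0.5, 1.2, 0.8)
```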

3.
We justify and discuss expressions for joint lower and upper expectations in imprecise probability trees, in terms of the sub- and supermartingales that can be associated with such trees. These imprecise probability trees can be seen as discrete-time stochastic processes with finite state sets and transition probabilities that are imprecise, in the sense that they are only known to belong to some convex closed set of probability measures. We derive various properties for their joint lower and upper expectations, and in particular a law of iterated expectations. We then focus on the special case of imprecise Markov chains, investigate their Markov and stationarity properties, and use these, by way of an example, to derive a system of non-linear equations for lower and upper expected transition and return times. Most importantly, we prove a game-theoretic version of the strong law of large numbers for submartingale differences in imprecise probability trees, and use this to derive point-wise ergodic theorems for imprecise Markov chains.

4.
We consider convergence of Markov chains with uncertain parameters, known as imprecise Markov chains, which contain an absorbing state. We prove that, under conditioning on non-absorption, the imprecise conditional probabilities converge independently of the initial imprecise probability distribution, provided some regularity conditions hold. This generalises a known result from the classical theory of Markov chains by Darroch and Seneta [6].
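The classical precise-chain phenomenon that this result generalises is easy to verify numerically: conditioned on non-absorption, the distribution converges to the normalized left Perron eigenvector of the substochastic transient block, regardless of the initial distribution. A minimal sketch with an illustrative matrix:

```python
import numpy as np

# substochastic transition matrix among transient states (row deficits
# are the per-step absorption probabilities); values are illustrative
Q = np.array([[0.4, 0.4, 0.1],
              [0.3, 0.3, 0.3],
              [0.1, 0.4, 0.4]])

mu = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    mu = mu @ Q
    mu /= mu.sum()                 # condition on not being absorbed yet

w, V = np.linalg.eig(Q.T)
qs = V[:, np.argmax(w.real)].real
qs /= qs.sum()                     # quasi-stationary distribution
print(mu, qs)                      # the two agree to several digits
```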

5.
From p–m chains to Markov chains in random environments
Section 1 introduces the concept of p–m chains and uses it to construct the corresponding Markov chains in random environments and skew-product Markov chains. Section 2 introduces a series of probability characteristic functions related to Markov chains in random environments and establishes a number of relations among these functions. These results generalize the corresponding results for classical Markov chains and are very useful in studying the limit theory of Markov chains in random environments.

6.
The chain graph (CG) is a general class of graphical Markov models. Different chain graphs may describe the same conditional independence structure; such CGs are said to be Markov equivalent. In 1990 Frydenberg showed that every class of Markov equivalent CGs contains a CG with the greatest number of lines, called the largest chain graph. This paper presents an efficient algorithm for finding the largest chain graph of the Markov equivalence class of a given CG. The computational complexity of the algorithm is O(n^3), which improves on the O(n!) complexity of existing algorithms. A more intuitive graphical characterization of the largest chain graph is also provided, based on the algorithm in this paper.

7.
We study the problem of stationarity and ergodicity for autoregressive multinomial logistic time series models which possibly include a latent process and are defined by a GARCH-type recursive equation. We considerably improve upon the existing stationarity and ergodicity conditions for those models. Proofs are based on theory developed for chains with complete connections. A useful coupling technique is employed for studying ergodicity of infinite-order finite-state stochastic processes which generalize finite-state Markov chains. Furthermore, for the case of finite-order Markov chains, we discuss ergodicity properties of a model which includes strongly exogenous but not necessarily bounded covariates.

8.
In previous work, the embedding problem has been examined within the entire set of discrete-time Markov chains. For several phenomena, however, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove necessary conditions for the transition matrix of a discrete-time Markov chain with ordered states to be embeddable in a state-wise monotone Markov chain over time intervals of length 0.5: a transition matrix with a square root within the set of state-wise monotone matrices must have trace at least 1.
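The necessary condition is straightforward to check numerically. The sketch below (an illustrative 2-state matrix, not one of the paper's examples) computes the principal matrix square root, i.e. a candidate transition matrix for the half-length time interval, and tests the trace condition:

```python
import numpy as np
from scipy.linalg import sqrtm

# illustrative 2-state transition matrix with ordered states
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

print(np.trace(P) >= 1.0)        # necessary condition from the paper
R = sqrtm(P).real                # principal square root: R @ R == P
print(np.allclose(R @ R, P))
print((R >= 0).all(), np.allclose(R.sum(axis=1), 1.0))  # is R stochastic?
```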

9.
This paper develops exponential-type upper bounds for scaled occupation measures of singularly perturbed Markov chains in discrete time. By considering two time scales in the Markov chains, an asymptotic analysis is carried out. The cases in which the fast-changing transition probability matrix is irreducible, and in which it is divisible into l ergodic classes, are examined first, and upper bounds on a sequence of scaled occupation measures are derived. Extensions to Markov chains involving transient states and/or nonhomogeneous transition probabilities are then dealt with. The results further our understanding of the underlying Markov chains and related dynamic systems, which is essential for solving many control and optimization problems.

10.
Information-theoretic methods are used to prove convergence in information divergence of reversible Markov chains. Some ergodic theorems for information divergence are also proved.
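A quick numerical illustration of the phenomenon (not the paper's proofs): for a chain with stationary distribution π, the information divergence D(μP^t ‖ π) decreases monotonically in t. The reversible birth–death chain and initial distribution below are illustrative:

```python
import numpy as np
from scipy.stats import entropy        # KL divergence

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])     # reversible birth-death chain
# stationary distribution: left eigenvector for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = V[:, np.isclose(w, 1)].real.ravel()
pi /= pi.sum()                         # -> (0.25, 0.5, 0.25)

mu = np.array([1.0, 0.0, 0.0])
for t in range(5):
    print(t, entropy(mu, pi))          # D(mu P^t || pi), strictly decreasing
    mu = mu @ P
```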

11.
We describe an extension of the hidden Markov model in which the manifest process conditionally follows a partition model. The assumption of local independence for the manifest random variable is thus relaxed to arbitrary dependence. The proposed class generalizes various existing models for discrete and continuous time series, and allows for a finer trade-off between bias and variance. The models are fitted through an EM algorithm, with the usual recursions for hidden Markov models extended at no additional computational cost.

12.
In the following article, we investigate a particle filter for approximating Feynman–Kac models with indicator potentials, and we use this algorithm within Markov chain Monte Carlo (MCMC) to learn static parameters of the model. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) and rare-event problems. Such models require advanced particle filter or MCMC algorithms to perform estimation. One drawback of existing particle filters is that they may "collapse": the algorithm may terminate early because of the indicator potentials. In this article, using a newly developed special case of the locally adaptive particle filter, we employ an algorithm that deals with this problem, at the price of a random cost per time step. In particular, we show how this algorithm can be used within MCMC, via particle MCMC. It is established that, when computational time is not taken into account, the new MCMC algorithm applied to a simplified model has a lower asymptotic variance than a standard particle MCMC algorithm. Numerical examples are presented for ABC approximations of HMMs.

13.
An absorbing Markov chain is an important statistical model, widely used in algorithmic modeling across many disciplines, such as digital image processing and network analysis. To obtain the stationary distribution of such a model, the inverse of the transition matrix usually needs to be calculated, which remains difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution of models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use the method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it is able to handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains, and similar techniques can be used to avoid calculating the inverse matrix in the general method.
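As an illustration of the eigenvector idea on the fair gambler's ruin (a sketch in the spirit of the abstract, not the authors' exact algorithm): the absorption probabilities into one absorbing state form the function h with Ph = h fixed by its boundary values, so they can be read off from the eigenvalue-1 eigenspace without inverting any matrix:

```python
import numpy as np

# gambler's ruin on {0,...,N}: states 0 and N absorbing, fair coin
N = 5
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

# the absorption probabilities into state N are the unique h with
# P h = h, h[0] = 0, h[N] = 1, found inside the eigenvalue-1 eigenspace
w, V = np.linalg.eig(P)
E = V[:, np.isclose(w, 1.0)].real          # 2-dimensional eigenspace
c = np.linalg.solve(E[[0, N], :], np.array([0.0, 1.0]))
h = E @ c
print(h)                                    # i/N for the fair game
```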

14.
We use a non-Markovian coupling and small modifications of techniques from the theory of finite Markov chains to analyze some Markov chains on continuous state spaces. The first is a generalization of a sampler introduced by Randall and Winkler, and the second a Gibbs sampler on narrow contingency tables.

15.
Mixing time quantifies the speed of convergence of a Markov chain to its stationary distribution and is an important quantity for the performance of MCMC sampling. It is known that the mixing time of a reversible chain can be significantly improved by lifting, resulting in an irreversible chain, while changing the topology of the chain. We supplement this result by showing that if the connectivity graph of a Markov chain is a cycle, then there is an Ω(n^2) lower bound for the mixing time. This is the same order of magnitude as is known for reversible chains on the cycle.
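The Ω(n^2) behaviour on the cycle is easy to observe empirically. The brute-force sketch below iterates the distribution of the lazy simple random walk on an n-cycle and reports the total-variation mixing time for a few sizes; the values roughly quadruple as n doubles:

```python
import numpy as np

def tv_mixing_time(n, eps=0.25):
    """Mixing time of the lazy random walk on an n-cycle (brute force)."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25
    pi = np.full(n, 1.0 / n)                # uniform stationary distribution
    mu = np.zeros(n)
    mu[0] = 1.0
    t = 0
    while 0.5 * np.abs(mu - pi).sum() > eps:
        mu = mu @ P
        t += 1
    return t

for n in (8, 16, 32):
    print(n, tv_mixing_time(n))             # roughly quadruples as n doubles
```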

16.
The use of Gibbs samplers driven by improper posteriors has been a controversial issue in the statistics literature over the last few years. It has recently been demonstrated that it is possible to make valid statistical inferences through such Gibbs samplers. Furthermore, theoretical and empirical evidence has been given to support the idea that there are actually computational advantages to using these nonpositive recurrent Markov chains rather than more standard positive recurrent chains. These results provide motivation for a general study of the behavior of the Gibbs Markov chain when it is not positive recurrent. This article concerns stability relationships among the two-variable Gibbs sampler and its subchains. We show that these three Markov chains always share the same stability; that is, they are either all positive recurrent, all null recurrent, or all transient. In addition, we establish general results concerning the ways in which positive recurrent Markov chains can arise from null recurrent and transient Gibbs chains. Six examples of varying complexity are used to illustrate the results.

17.
HMM-based identification of CpG island locations
The hidden Markov model is a statistical method proposed in the 1970s, initially used mainly for speech recognition; in 1989 Churchill introduced it into computational biology, and today the HMM is one of the most widely used statistical methods in bioinformatics. This paper gives a concise description of Markov processes and HMMs, and surveys their application to identifying the locations of CpG islands.
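As a pointer to how such methods look in practice (a sketch, not the paper's model): a common precursor to the full HMM treatment is to score a DNA window by the log-odds of two first-order Markov models, one for CpG islands and one for background. The transition matrices below are purely illustrative placeholders; real values would be estimated from annotated training data.

```python
import numpy as np

IDX = {b: i for i, b in enumerate("ACGT")}

# illustrative transition matrices (rows: from A,C,G,T; cols: to A,C,G,T)
island = np.array([[0.18, 0.27, 0.43, 0.12],
                   [0.17, 0.37, 0.27, 0.19],
                   [0.16, 0.34, 0.38, 0.12],
                   [0.08, 0.36, 0.38, 0.18]])
background = np.array([[0.30, 0.20, 0.29, 0.21],
                       [0.32, 0.30, 0.08, 0.30],
                       [0.25, 0.25, 0.30, 0.20],
                       [0.18, 0.24, 0.29, 0.29]])

def log_odds(seq):
    """Log-odds score of seq under the island vs. background models;
    positive scores point towards a CpG island."""
    s = 0.0
    for a, b in zip(seq, seq[1:]):
        s += np.log(island[IDX[a], IDX[b]] / background[IDX[a], IDX[b]])
    return s

print(log_odds("CGCGCGCG"))   # strongly positive: CpG-rich
print(log_odds("ATATATAT"))   # negative: background-like
```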

18.
This work develops asymptotic expansions for solutions of systems of backward equations of time-inhomogeneous Markov chains in continuous time. Owing to rapid progress in technology and increasing complexity in modeling, the underlying Markov chains often have large state spaces, which makes the computational tasks infeasible. To reduce the complexity, two-time-scale formulations are used. By introducing a small parameter ε > 0 and using suitable decomposition and aggregation procedures, the problem is formulated as a singular perturbation problem. Both Markov chains having recurrent states only and Markov chains also including transient states are treated. Under certain weak irreducibility and smoothness conditions on the generators, the desired asymptotic expansions are constructed, and error bounds are obtained.

19.
We generalize the decomposition method for the Poincaré inequality of finite Markov chains in Jerrum et al. (Ann. Appl. Probab., 14, 1741–1765 (2004)) to reversible continuous-time Markov chains. Proceeding inductively, we give a lower bound on the spectral gap of the ergodic open Jackson network via the decomposition method and a symmetrization procedure. An upper bound on the spectral gap is also presented.

20.
Multi-dimensional asymptotically quasi-Toeplitz Markov chains with discrete and continuous time are introduced. Ergodicity and non-ergodicity conditions are proved. A numerically stable algorithm to calculate the stationary distribution is presented. An application of such chains to retrial queueing models with a Batch Markovian Arrival Process is briefly illustrated. AMS Subject Classifications: Primary 60K25 · 60K20
