Similar Articles
20 similar articles found (search time: 46 ms)
1.
宋明珠  吴永锋 《数学杂志》2015,35(2):368-374
This paper studies the strong law of large numbers for Markov double-chain functions in a Markovian random environment. By analyzing the double-chain function piecewise, a sufficient condition is obtained for the strong law of large numbers to hold for Markov double-chain functions in a Markovian environment. Applying this law, limit properties of the transition probabilities of a Markov double chain from one state to another are derived, extending the limit theory of Markov double chains.

2.
In Sec. 1, we introduce several basic concepts, such as the random transition function, the p-m process, and the Markov process in a random environment, and give some examples constructing a random transition function from a non-homogeneous density function. In Sec. 2, we construct the Markov process in a random environment and the skew-product Markov process from a p-m process, and investigate the properties of the Markov process in a random environment together with the original process, the environment process, and the skew-product process. In Sec. 3, we give several equivalence theorems on Markov processes in random environments.

3.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem.
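The detailed-balance property behind reversibility can be checked numerically. The sketch below (plain Python; the matrix and all helper names are invented for illustration) shows a transition matrix that violates detailed balance and repairs it with the simple additive reversibilization. Note this repair is *not* the norm-closest reversible chain that the paper's convex program computes; it only illustrates the property being restored.

```python
# Detailed-balance check and a crude "reversibilization" sketch.
# The abstract's method solves a convex minimization; this toy only
# demonstrates the reversibility property itself.

def stationary(P, iters=2000):
    """Power-iterate a row-stochastic matrix to its stationary row."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def detailed_balance_gap(P, pi):
    """Largest violation of pi_i P_ij == pi_j P_ji."""
    n = len(P)
    return max(abs(pi[i] * P[i][j] - pi[j] * P[j][i])
               for i in range(n) for j in range(n))

def reversibilize(P, pi):
    """Additive reversibilization R_ij = (pi_i P_ij + pi_j P_ji) / (2 pi_i).
    R is reversible w.r.t. pi, but is NOT the nearest reversible chain
    in the sense of the paper -- just the simplest repair."""
    n = len(P)
    return [[(pi[i] * P[i][j] + pi[j] * P[j][i]) / (2 * pi[i])
             for j in range(n)] for i in range(n)]

P = [[0.6, 0.3, 0.1],      # a slightly non-reversible 3-state chain
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
pi = stationary(P)         # here pi = (0.375, 0.375, 0.25)
R = reversibilize(P, pi)   # R satisfies detailed balance, P does not
```

Because the stationary row is preserved by the symmetrization, `R` is again row-stochastic with the same stationary distribution as `P`.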

4.
A probabilistic method for constructing time-inhomogeneous Markov chains from their local characteristics is given. By conditioning on the time of the first jump after an arbitrary instant, the Markov property of the process is proved and a recursive expression for the transition probabilities is obtained; this approach makes the probabilistic meaning explicit. The strong Markov property and a series of further properties are then established. This provides a rigorous theoretical basis for numerical simulation, that is, for generating random trajectories of time-inhomogeneous Q-processes by Monte Carlo.

5.
We study the limit properties of the harmonic mean of random transition probabilities for second-order nonhomogeneous Markov chains indexed by a tree. As corollaries, the corresponding harmonic-mean limit properties are obtained for nonhomogeneous Markov chains indexed by a tree and for ordinary nonhomogeneous Markov chains.

6.
In this article, we introduce a class of Markov processes whose transition probability densities are defined by multifractional pseudodifferential evolution equations on compact domains with variable local dimension. The infinitesimal generators of these Markov processes are given by the trace of strongly elliptic pseudodifferential operators of variable order on such domains. The results derived provide a pseudomultifractal version of some existing special classes of multifractional Markov processes. In particular, pseudostable processes are defined on domains with variable local dimension in this framework. In the case where the local dimension of the domain and the local Hölder exponents of the transition probability densities are constant, the existing results on fractal versions of Lévy processes are recovered.

7.
In previous work, the embedding problem was examined within the entire set of discrete-time Markov chains. However, for several phenomena, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove necessary conditions on the transition matrix of a discrete-time Markov chain with ordered states to be embeddable in a state-wise monotone Markov chain with respect to time intervals of length 0.5: a transition matrix with a square root within the set of state-wise monotone matrices has a trace at least equal to 1.
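The trace condition can be illustrated numerically. In this sketch (the matrices are invented), a state-wise monotone stochastic matrix `R` is squared to produce a transition matrix `P` that is embeddable at half steps by construction, and the trace bound is then checked.

```python
# Numeric illustration of the necessary condition above: if a stochastic
# matrix P has a square root R that is itself stochastic and state-wise
# monotone, then trace(P) >= 1.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def is_statewise_monotone(A, tol=1e-12):
    """Rows stochastically ordered: tail sums nondecreasing down the rows."""
    n = len(A)
    for k in range(n):
        tails = [sum(row[k:]) for row in A]
        if any(tails[i] > tails[i + 1] + tol for i in range(n - 1)):
            return False
    return True

R = [[0.7, 0.3],         # state-wise monotone and stochastic
     [0.4, 0.6]]
P = matmul(R, R)         # embeddable at half steps by construction
# trace(P) is about 1.09, consistent with trace(P) >= 1
```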

8.
The equivalence between the monotonicity of a Markov integrated semigroup and the monotonicity of its transition function is discussed, and a necessary and sufficient condition for the minimal Q-semigroup to be monotone is obtained.

9.
Given a killed Markov process, one can use a procedure of Ikeda et al. to revive the process at the killing times. The revived process is again a Markov process, and its transition function is the minimal solution of a Markov renewal equation. In this paper we calculate such solutions for a class of revived processes.

10.
Decision-making under uncertainty and imprecision in real-world problems is a complex task. In this paper we introduce general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.
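The finite convergence claimed for fuzzy Markov chains can be seen in a minimal sketch (membership values invented): max-min powers of a fuzzy transition matrix repeat after finitely many steps, because every entry is drawn from the finite set of values already present in the matrix.

```python
# Max-min "composition" powers of a fuzzy transition matrix reach a
# stationary (or periodic) matrix in finitely many steps, unlike the
# asymptotic convergence of stochastic matrix powers. Entries are
# membership degrees in [0, 1]; the example matrix is illustrative.

def maxmin(A, B):
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

F = [[0.8, 0.3, 0.5],
     [0.2, 0.9, 0.4],
     [0.6, 0.1, 0.7]]

powers = [F]
while True:
    nxt = maxmin(powers[-1], F)
    if nxt in powers:     # a repeat marks the stationary/periodic regime
        break
    powers.append(nxt)
# for this F, the second power is already stationary:
# composing powers[-1] with F returns it unchanged
```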

11.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
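The flavor of an interactive chain can be seen in a toy simulation (all functional forms below are invented; the paper's contribution is the general monotonicity criterion, which this sketch does not verify): the transition matrix depends on the current population distribution, yet the distribution still settles to a unique equilibrium.

```python
# Toy interactive Markov chain: each individual's 2-state transition
# matrix depends on the population distribution x, with the pull toward
# a state growing with the share of the population already there.

def P_of(x):
    a = 0.1 + 0.2 * x[1]   # prob of moving 0 -> 1, grows with share in 1
    b = 0.1 + 0.2 * x[0]   # prob of moving 1 -> 0, grows with share in 0
    return [[1 - a, a], [b, 1 - b]]

def step(x):
    P = P_of(x)
    return [x[0] * P[0][0] + x[1] * P[1][0],
            x[0] * P[0][1] + x[1] * P[1][1]]

x = [0.9, 0.1]             # start far from equilibrium
for _ in range(200):
    x = step(x)
# x converges geometrically to the unique equilibrium (0.5, 0.5):
# the update reduces to x1' = 0.1 + 0.8 * x1, a contraction
```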

12.
Multistate transition models are increasingly used in credit risk applications, as they allow us to quantify the evolution of the process among different states. If the process is Markov, analysis and prediction are substantially simpler, so analysts would like to use these models when they are applicable. In this paper, we develop a procedure for assessing the Markov hypothesis and discuss different ways of implementing the test procedure. One issue when the sample size is large is that statistical test procedures will detect even small deviations from the Markov model when these differences are not of practical interest. To address this problem, we propose an approach to formulate and test the null hypothesis of “weak non-Markov.” The situation where the transition probabilities are heterogeneous is also examined, and approaches to accommodate this case are indicated. Simulation studies are used extensively to study the properties of the procedures, and two applications are presented to illustrate the results.
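One standard ingredient of such an assessment is a likelihood-ratio (G²) statistic comparing a first-order fit against a second-order fit, computed from triple counts. The sketch below (Python; the chain, sequence length, and helper names are invented, and this is not necessarily the paper's exact test) simulates a genuinely first-order chain, for which the statistic should look like a chi-square draw with s(s-1)² degrees of freedom.

```python
import math, random

# G^2 statistic for first-order vs second-order Markov dependence,
# computed from counts of consecutive triples (i, j, k).

def g2_first_vs_second(seq, states):
    n3, n2a, n2b, n1 = {}, {}, {}, {}
    for i, j, k in zip(seq, seq[1:], seq[2:]):
        n3[i, j, k] = n3.get((i, j, k), 0) + 1
        n2a[i, j] = n2a.get((i, j), 0) + 1      # n_{ij+}
        n2b[j, k] = n2b.get((j, k), 0) + 1      # n_{+jk}
        n1[j] = n1.get(j, 0) + 1                # n_{+j+}
    g2 = 2 * sum(c * math.log(c * n1[j] / (n2a[i, j] * n2b[j, k]))
                 for (i, j, k), c in n3.items())
    s = len(states)
    df = s * (s - 1) ** 2                        # degrees of freedom
    return g2, df

random.seed(0)
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}   # a genuinely first-order chain
seq, x = [], 0
for _ in range(4000):
    x = 0 if random.random() < P[x][0] else 1
    seq.append(x)

g2, df = g2_first_vs_second(seq, [0, 1])
# under the first-order null, g2 behaves like chi-square with df = 2
```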

13.
This paper studies representations of weakly symmetric Markov integrated semigroups. By an integral method, the Kendall and Karlin-McGregor representations of weakly symmetric Markov integrated semigroups are obtained, generalizing the Kendall and Karlin-McGregor representations of transition functions.

14.
莫晓云  杨向群 《数学学报》2018,61(1):143-154
This paper studies batch Markovian arrival processes (BMAPs) by a sample-path method, in contrast to the matrix-analytic methods commonly used for BMAPs. From the representation (D_k, k = 0, 1, 2, …) of a BMAP, the jump probabilities of the BMAP are obtained; the phase process is shown to be a time-homogeneous Markov chain, and its transition probabilities and density matrix are derived. Moreover, for a Q-process J with finite state space, letting N denote the counting process of its jump times, it is proved that the accompanying process X* = (N, J) is a MAP, and the transition probabilities and representation (D_0, D_1) of this MAP are derived, expressed in terms of the density matrix Q.

15.
Within the set of discrete-time Markov chains, a Markov chain is embeddable in case its transition matrix has at least one root that is a stochastic matrix. The present paper examines the embedding problem for discrete-time Markov chains with three states and with real eigenvalues. Sufficient embedding conditions are proved for diagonalizable transition matrices as well as for non-diagonalizable transition matrices and for all possible configurations regarding the sign of the eigenvalues. The embedding conditions are formulated in terms of the projections and the spectral decomposition of the transition matrix.
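For intuition, the spectral recipe can be carried out explicitly on the simplest symmetric three-state family, where the projections are known in closed form. This is only a sketch under that special structure (the parameter `b` and all names are invented); the paper handles general three-state matrices with real eigenvalues.

```python
import math

# Spectral square root for P with diagonal 1 - 2b and off-diagonal b.
# Its decomposition is P = 1 * (J/3) + (1 - 3b) * (I - J/3), where J is
# the all-ones matrix, so a candidate stochastic root replaces each
# eigenvalue by its square root on the corresponding projection.

b = 0.2
lam = 1 - 3 * b                      # the repeated eigenvalue (0.4 here)
r = math.sqrt(lam)                   # real root requires lam >= 0

n = 3
J3 = [[1 / 3] * n for _ in range(n)]               # projection for eigenvalue 1
I = [[float(i == j) for j in range(n)] for i in range(n)]

# R = 1 * (J/3) + sqrt(lam) * (I - J/3)
R = [[J3[i][j] + r * (I[i][j] - J3[i][j]) for j in range(n)]
     for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1 - 2 * b if i == j else b for j in range(n)] for i in range(n)]
R2 = matmul(R, R)
# R is row-stochastic with nonnegative entries, and R @ R reproduces P,
# so this P is embeddable at half steps
```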

16.
This paper develops exponential-type upper bounds for scaled occupation measures of singularly perturbed Markov chains in discrete time. By exploiting the two-time-scale structure of the Markov chains, an asymptotic analysis is carried out. The cases in which the fast-changing transition probability matrix is irreducible, or is divisible into l ergodic classes, are examined first, and upper bounds for a sequence of scaled occupation measures are derived. Extensions to Markov chains involving transient states and/or nonhomogeneous transition probabilities are then treated. The results further our understanding of the underlying Markov chains and related dynamic systems, which is essential for solving many control and optimization problems.

17.
During the recent past, there has been renewed interest in Markov chains, owing to their attractive properties for analyzing real-life data arising as time series or longitudinal data in various fields. Models have been proposed for fitting first- or higher-order Markov chains. However, there is a serious lack of realistic methods for linking covariate dependence with transition probabilities in order to analyze the factors associated with such transitions, especially for higher-order Markov chains. L.R. Muenz and L.V. Rubinstein [Markov models for covariate dependence of binary sequences, Biometrics 41 (1985) 91–101] employed logistic regression models to analyze the transition probabilities of a first-order Markov model. That methodology is still far from a generalization in terms of formulating a model for higher-order Markov chains. This study aims to provide a comprehensive covariate-dependent Markov model of higher order. The proposed model generalizes the estimation procedure for Markov models of any order. The proposed models and inference procedures are simple, and the covariate dependence of the transition probabilities of any order can be examined without making the underlying model complex. An example based on rainfall data illustrates the utility of the proposed model for analyzing complex real-life problems. The application of the proposed method indicates that higher-order covariate-dependent Markov models can be employed in a very useful manner, and the results can provide in-depth insights to both researchers and policymakers for resolving complex problems of underlying factors attributing to different types of transitions, reverse transitions, and repeated transitions. The estimation and test procedures can be employed for any order of Markov model without making the theory and interpretation difficult for common users.

18.
Standard tri-point transition function
It is usually difficult to express a family of tri-point transition functions (TTFs) by a transition matrix, as one does for one-parameter Markov processes. In this paper, we define three kinds of connection matrices on the states of a standard tri-point transition function (STTF) and study their essential properties, give a constructive method for the constant-value standard tri-point transition function, and give a general expression of the state-symmetric standard tri-point transition function by a sequence of transition matrices of special and simple one-parameter Markov processes.

19.
This paper considers, for a continuous-time homogeneous Markov chain, the number of state transitions during (0, t] and the number of transitions from a state set A to a state set B. To compute the mean number of transitions, we obtain some simple formulas that are extremely useful in stochastic models, and we introduce the infinite phase-type (PH) distribution.

20.
Starting from the definitions and the properties of reinforced renewal processes and reinforced Markov renewal processes, we characterize, via exchangeability and de Finetti's representation theorem, a prior that consists of a family of Dirichlet distributions on the space of Markov transition matrices and beta-Stacy processes on distribution functions. Then, we show that this family is conjugate and give some estimation results.
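The conjugacy of independent Dirichlet priors on transition-matrix rows can be sketched in a few lines: observing a transition i → j simply adds one to the corresponding Dirichlet parameter. The prior values and the observed path below are invented for illustration; the paper's prior is richer (it also involves beta-Stacy processes).

```python
# Dirichlet-row conjugacy for Markov transition matrices: each row i
# has an independent Dirichlet(alpha[i]) prior, and observed transition
# counts are added to the parameters to form the posterior.

def posterior_rows(alpha, path):
    """alpha[i][j]: Dirichlet parameter of row i; path: observed states."""
    post = [row[:] for row in alpha]
    for i, j in zip(path, path[1:]):
        post[i][j] += 1
    return post

alpha = [[1, 1], [1, 1]]            # uniform prior on each row
path = [0, 0, 1, 0, 1, 1, 1, 0]     # made-up observed trajectory
post = posterior_rows(alpha, path)

# posterior mean estimate of P[i][j] is post[i][j] / sum(post[i])
mean = [[a / sum(row) for a in row] for row in post]
```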

