Similar Documents
A total of 20 similar documents were found (search time: 15 ms)
1.
A high-order hidden Markov model is first converted, via the Hadar equivalence transformation, into an equivalent first-order vector-valued hidden Markov model; the Viterbi algorithm for the first-order vector-valued model is then derived from the dynamic-programming principle; finally, the equivalence between the two models yields a generalized, dynamic-programming-based Viterbi algorithm for high-order hidden Markov models. The result generalizes, to some extent, the Viterbi decoding algorithm that appears in almost all of the hidden Markov model literature, and thereby further enriches and develops the algorithmic theory of high-order hidden Markov models.
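After the equivalence transformation described in this abstract, decoding reduces to the usual first-order Viterbi recursion on an enlarged, vector-valued state space. Below is a minimal sketch of that first-order recursion in log space, assuming NumPy conventions; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def viterbi(pi0, A, B, obs):
    """First-order Viterbi decoding by dynamic programming (log space).

    pi0: (n,) initial distribution, A: (n, n) transition matrix,
    B: (n, m) emission matrix, obs: sequence of observation indices.
    Returns the most probable hidden state path.
    """
    pi0, A, B = map(np.asarray, (pi0, A, B))
    T, n = len(obs), len(pi0)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi0) + logB[:, obs[0]]   # best log-prob of paths ending in each state
    psi = np.zeros((T, n), dtype=int)       # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + logA      # scores[i, j]: best path into i, then i -> j
        psi[t] = np.argmax(scores, axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

For a high-order model, the same recursion applies once tuples of consecutive hidden states are re-encoded as single states, which is the reduction the abstract refers to.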

2.
高小燕 《大学数学》2013,29(1):38-42
This paper studies the strong law of large numbers for functionals of a class of nonhomogeneous Markov chains, namely asymptotically circular Markov chains. The concept of an asymptotically circular Markov chain is first introduced and several lemmas are given. Using the strong law of large numbers for the frequencies of ordered state pairs of asymptotically circular Markov chains, a strong law of large numbers for functionals of such chains is stated and proved; known results are obtained as corollaries of the theorem.

3.
This paper studies countable Markov chains in a Markov environment and proves that the number of returns of the process to a small cylinder set is asymptotically Poisson distributed. To this end, an entropy function h is introduced, the Shannon-McMillan-Breiman theorem for Markov chains in a Markov environment is established, and an example of Poisson approximation for a non-Markov process is given. When the environment process degenerates to a constant sequence, a Poisson limit theorem for countable Markov chains is obtained; this extends Pitskel's corresponding result for finite Markov chains.

4.
This paper shows how lumping in Markov chains can be extended to Markov set-chains. The criteria required for lumping in Markov set-chains are less restrictive than those for Markov chains.

5.
Strong law of large numbers for functions of Markov chains in a Markov environment with countable state space   Total citations: 3 (self-citations: 0, by others: 3)
李应求 《数学杂志》2003,23(4):484-490
The relationship between Markov bi-chains and Markov chains in random environments is discussed. On this basis, the strong law of large numbers for functions of Markov chains in a Markov environment with a discrete parameter is studied, and sufficient conditions imposed directly on the chain and on the sample functions of the process are given.

6.
Markov properties and strong Markov properties for random fields are defined and discussed. Special attention is given to those defined by I. V. Evstigneev. The strong Markov nature of Markov random fields with respect to random domains such as [0, L], where L is a multidimensional extension of a stopping time, is explored. A special case of this extension is shown to generalize a result of Merzbach and Nualart for point processes. As an additional example, Evstigneev's Markov and strong Markov properties are considered for independent increment jump processes.

7.

Versions of the Gibbs Sampler are derived for the analysis of data from hidden Markov chains and hidden Markov random fields. The principal new development is to use the pseudolikelihood function associated with the underlying Markov process in place of the likelihood, which is intractable in the case of a Markov random field, in the simulation step for the parameters in the Markov process. Theoretical aspects are discussed and a numerical study is reported.
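As an illustration of the alternating simulation scheme described above, here is a toy Gibbs sampler for a hidden Markov chain with discrete emissions. It uses the exact full conditionals, which are tractable in the chain case (the paper's pseudolikelihood substitution is needed mainly for Markov random fields); the priors, names, and update scheme are assumptions made for this sketch, not the paper's implementation.

```python
import numpy as np

def gibbs_hmm(obs, n_states, n_symbols, n_iter=500, seed=None):
    """Toy Gibbs sampler for a discrete hidden Markov chain.

    Alternates (i) single-site resampling of the hidden states given the
    current parameters with (ii) conjugate Dirichlet updates of the
    transition and emission matrices given the states.
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs)
    T = len(obs)
    A = np.full((n_states, n_states), 1.0 / n_states)    # transition matrix
    B = np.full((n_states, n_symbols), 1.0 / n_symbols)  # emission matrix
    z = rng.integers(n_states, size=T)                   # hidden state path

    for _ in range(n_iter):
        # (i) resample each hidden state from its full conditional
        for t in range(T):
            w = B[:, obs[t]].copy()
            if t > 0:
                w *= A[z[t - 1], :]
            if t < T - 1:
                w *= A[:, z[t + 1]]
            z[t] = rng.choice(n_states, p=w / w.sum())
        # (ii) resample rows of A and B from Dirichlet full conditionals
        for i in range(n_states):
            trans_counts = np.bincount(z[1:][z[:-1] == i], minlength=n_states)
            A[i] = rng.dirichlet(1.0 + trans_counts)
            emit_counts = np.bincount(obs[z == i], minlength=n_symbols)
            B[i] = rng.dirichlet(1.0 + emit_counts)
    return A, B, z
```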

8.
郭明乐  任永 《数学杂志》2006,26(4):441-445
This paper studies Markov chains in a double-infinite environment and constructs a Markov bi-chain. Using Markov chain theory, sufficient conditions for the central limit theorem for Markov chains in a double-infinite environment are obtained under the assumption that the bi-chain is stationary and ergodic.

9.
The properties of Markov chains in a Markov environment with a discrete parameter are discussed, and a central limit theorem for functionals of Markov chains in a Markov environment is established. Sufficient conditions imposed on the chain and on the sample functions of the process are also given.

10.
Using the concept of absolute mean strong ergodicity for Markov chains, this paper first establishes the equivalence of absolute mean strong ergodicity and strong ergodicity for homogeneous Markov chains. Then, by introducing another strongly ergodic nonhomogeneous Markov chain, it gives a sufficient condition for a nonhomogeneous Markov chain to be absolutely mean strongly ergodic.

11.
Strong law of large numbers for Markov chains in a double-infinite environment   Total citations: 2 (self-citations: 0, by others: 2)
郭明乐 《应用数学》2005,18(1):174-180
Within the study of Markov chains in random environments, a time-homogeneous Markov bi-chain is constructed and its existence and basic properties are discussed. Using the properties of the bi-chain, limit laws for functions of Markov chains in a double-infinite environment are obtained, and two sufficient conditions for the strong law of large numbers for functions of the chain are given.

12.
§1 State classification. Definition 1.1. Let I be the set of nonnegative integers and let P = {p_ij(s,t) | i, j ∈ I, a ≤ s ≤ t ≤ b} be a matrix of transition functions. P is said to be right-standard for i at t if lim_{h→0+} p_ii(t, t+h) = 1, and left-standard for i at t if lim_{h→0+} p_ii(t−h, t) = 1. If P is both right-standard and left-standard for i at t, then P is called standard for i at t. If P is standard for i at every t, then P is called standard for i; right-standard or left-standard for i is defined analogously. If P is standard for every i, then P is called standard; right-standard and left-standard P are defined analogously (cf. [5], [6]).

13.
Decision-making in an environment of uncertainty and imprecision for real-world problems is a complex task. This paper introduces general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.
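The finite (possibly periodic) convergence mentioned above concerns the powers of the fuzzy transition matrix under max-min composition. The following generic sketch iterates those powers until they repeat; it illustrates the phenomenon only and is not the paper's construction of the Cesàro average or the ergodic projection.

```python
import numpy as np

def maxmin_power_iteration(P, max_iter=100):
    """Iterate max-min powers of a fuzzy transition matrix until they repeat.

    Returns the list of distinct powers and the index at which the cycle
    (fixed point or periodic orbit) begins, or None if max_iter is reached.
    """
    P = np.asarray(P, dtype=float)
    seen = [P.copy()]
    for _ in range(max_iter):
        prev = seen[-1]
        # max-min composition: (prev o P)_ij = max_k min(prev_ik, P_kj)
        nxt = np.max(np.minimum(prev[:, :, None], P[None, :, :]), axis=1)
        for m, Q in enumerate(seen):
            if np.array_equal(nxt, Q):
                return seen, m
        seen.append(nxt)
    return seen, None
```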

14.
贾兆丽 《大学数学》2013,29(1):22-24
The central limit theorem for skew-product (绕积) Markov chains with a discrete parameter is discussed, and sufficient conditions imposed on the sample functions of the process are given; sufficient conditions for the central limit theorem for skew-product Markov chains to hold are thus obtained.

15.
In this paper, a differential-inclusion-based MPC scheme is developed for controller design for a discrete-time nonlinear Markov jump system with nonhomogeneous transition probabilities. By adopting a differential-inclusion-based convex model predictive control mechanism, the nonlinear Markov jump system with nonhomogeneous transition probabilities is enclosed by a set of linear Markov jump systems. In this way, the controller design for the nonlinear Markov jump system can be carried out by solving a set of linear Markov jump systems. Two numerical examples with different weighting parameters R are presented to illustrate the applicability of the results obtained.

16.
莫晓云 《经济数学》2010,27(3):28-34
Building on a Markov chain model of customer relationship development, a stochastic process of customer returns to the firm is constructed. It is proved that, under suitable assumptions, the customer return process is a Markov chain, and in fact a time-homogeneous one. The transition probabilities of this chain are derived, and from them several formulas for the expected return a customer brings to the firm are obtained, providing an effective quantitative basis for the firm's choice of customer relationship development strategies.
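The abstract does not give its formulas, but the kind of computation it describes, an expected return driven by a time-homogeneous transition matrix, can be sketched as follows. The reward vector, horizon, and function names are hypothetical and chosen only for illustration.

```python
import numpy as np

def expected_return(P, r, pi0, horizon):
    """Expected cumulative reward of a time-homogeneous Markov chain.

    P: (n, n) transition matrix, r: (n,) per-state reward,
    pi0: (n,) initial distribution, horizon: number of steps.
    Returns sum_{t=0}^{horizon} E[r(X_t)].
    """
    P, r = np.asarray(P, dtype=float), np.asarray(r, dtype=float)
    dist = np.asarray(pi0, dtype=float)
    total = dist @ r
    for _ in range(horizon):
        dist = dist @ P          # distribution after one more step
        total += dist @ r
    return total
```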

17.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach to solve this problem is to construct a Markov chain which converges to the desired probability distribution, and run this Markov chain to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require the knowledge of the stationary distribution only up to a normalizing constant. Each of these methods produces a reversible Markov chain which has the same stationary distribution as the original chain, and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result which shows that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
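A minimal Metropolis-Hastings sketch on a finite state space illustrates the point that only the unnormalized stationary distribution is needed. This is a generic implementation, not the specific chains or improvement methods studied in the paper.

```python
import numpy as np

def metropolis_hastings(pi_unnorm, proposal, x0, n_steps, seed=None):
    """Metropolis-Hastings on a finite state space.

    pi_unnorm: unnormalized target probabilities; the normalizing constant
    never enters, since only ratios pi(y)/pi(x) are used.
    proposal: (n, n) row-stochastic proposal matrix.
    The resulting chain is reversible with stationary distribution pi.
    """
    rng = np.random.default_rng(seed)
    pi_unnorm = np.asarray(pi_unnorm, dtype=float)
    proposal = np.asarray(proposal, dtype=float)
    n = len(pi_unnorm)
    x, path = x0, [x0]
    for _ in range(n_steps):
        y = rng.choice(n, p=proposal[x])
        # Hastings ratio with unnormalized target probabilities
        ratio = (pi_unnorm[y] * proposal[y, x]) / (pi_unnorm[x] * proposal[x, y])
        if rng.random() < min(1.0, ratio):
            x = y
        path.append(x)
    return path
```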

18.
Without specifying the structure of the time series, our distributional model is based on a multivariate discrete-time Markov family together with the one-dimensional marginal distributions of the variables involved. Such a model can simultaneously capture the interdependence among the time series and the dependence of each series along the time direction. The parametric copula is specified as the skew-t copula, which can handle asymmetric, skewed, and heavy-tailed data distributions. An empirical study of the daily returns of three stock indices shows that the skew-t copula Markov model outperforms the skew-normal copula Markov model, the t-copula Markov model, and the skew-t copula model without the Markov property.

19.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
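An interactive Markov chain of the kind described above can be iterated at the population level by letting the one-step transition matrix depend on the current distribution over states. The following generic sketch, with a user-supplied and hypothetical `transition_of` map, performs that iteration so convergence to an equilibrium can be inspected numerically; it does not implement the paper's monotonicity criterion.

```python
import numpy as np

def iterate_interactive_chain(transition_of, pi0, n_steps):
    """Iterate the population distribution of an interactive Markov chain.

    transition_of: function mapping a distribution (n,) to a row-stochastic
    (n, n) transition matrix; pi0: initial distribution; n_steps: iterations.
    Returns the trajectory of distributions.
    """
    pi = np.asarray(pi0, dtype=float)
    history = [pi.copy()]
    for _ in range(n_steps):
        P = transition_of(pi)   # transition probabilities depend on the population
        pi = pi @ P
        history.append(pi.copy())
    return history
```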

20.
The limit distribution for homogeneous Markov processes has been studied extensively and is well understood, but this is not the case for inhomogeneous Markov processes. In this paper, we review some recent results on inhomogeneous Markov processes generated by non-autonomous stochastic (partial) differential equations (SDEs for short). Under suitable conditions, we show that the distribution of recurrent solutions of SDEs constitutes the limit distribution of the corresponding inhomogeneous Markov processes.
