Similar Documents
20 similar documents found (search time: 93 ms)
1.
For a discrete-time Markov chain whose transition probability matrix P is reversible with respect to the stationary distribution, we establish the relationship between the L2 geometric convergence rate of the process and the spectral gap, and show that the optimal L2 geometric convergence rate coincides with the optimal geometric ergodicity rate.
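As a small illustration of the identity behind this abstract (a sketch using a hypothetical three-state birth-death chain, not an example from the paper): for a reversible chain, P is similar to a symmetric matrix, so its eigenvalues are real, and the L2 geometric convergence rate equals the spectral gap 1 − ρ, where ρ is the second-largest eigenvalue modulus.

```python
import numpy as np

# A hypothetical birth-death chain on {0, 1, 2}; birth-death chains are
# always reversible with respect to their stationary distribution.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Detailed balance (reversibility): pi_i P_ij == pi_j P_ji.
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)

# For reversible P, S = D^{1/2} P D^{-1/2} with D = diag(pi) is symmetric,
# so P has real eigenvalues 1 = e_0 >= e_1 >= ... >= e_{n-1} > -1.
D_half = np.diag(np.sqrt(pi))
eigs = np.sort(np.linalg.eigvalsh(D_half @ P @ np.linalg.inv(D_half)))[::-1]

# Spectral gap = 1 - (second-largest eigenvalue modulus); it governs the
# L2(pi) decay: ||mu P^n - pi|| <= (1 - gap)^n ||mu - pi||.
rho = max(abs(eigs[1]), abs(eigs[-1]))
gap = 1.0 - rho
print(pi, gap)
```

For this particular chain the stationary distribution is (0.25, 0.5, 0.25) and the nontrivial eigenvalues are 0.5 and 0, giving a spectral gap of 0.5.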

2.
This paper first establishes strong limit theorems for second-order non-homogeneous Markov chains indexed by a double-rooted tree with pointwise transition matrices, and obtains the strong law of large numbers for such chains. Finally, it gives the Shannon-McMillan theorem for second-order non-homogeneous Markov chains on a double-rooted tree in the sense of almost-everywhere convergence.

3.
Assuming that the sequence of transition matrices of a non-homogeneous Markov chain converges in the Cesàro sense, this paper uses the martingale central limit theorem to prove a central limit theorem for non-homogeneous Markov chains that differs from Dobrushin's result.

4.
Google founders Sergey Brin and Lawrence Page defined the web search algorithm PageRank as the unique stationary distribution of an aperiodic, irreducible Markov chain. This paper discusses the rates at which two important Markov chains used in web search — the maximal irreducible Markov chain and the minimal irreducible Markov chain — converge to their stationary distributions. The results show that when the damping factor α > 1/√2, the maximal chain converges to its stationary distribution faster than the minimal chain. We also give an expression for the k-step transition matrix of the minimal chain, as well as the derivatives of all orders and the Maclaurin series expansion of its stationary distribution with respect to the parameter α.
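The PageRank construction mentioned above can be sketched in a few lines (a minimal illustration on a hypothetical four-page link graph; the damping factor, graph, and convergence tolerance are all assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical link graph: adj[i][j] = 1 if page i links to page j.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

alpha = 0.85          # damping factor
n = adj.shape[0]

# Row-normalize out-links; a dangling page (no out-links) jumps uniformly.
out = adj.sum(axis=1, keepdims=True)
H = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)

# "Google matrix": follow a link with probability alpha, else teleport.
# This chain is aperiodic and irreducible, so the stationary distribution
# (the PageRank vector) exists and is unique.
G = alpha * H + (1 - alpha) / n

# Power iteration converges to the stationary distribution.
r = np.full(n, 1.0 / n)
for _ in range(200):
    r_next = r @ G
    if np.abs(r_next - r).sum() < 1e-12:
        break
    r = r_next

print(r)  # sums to 1; page 2, with the most in-links, ranks highest
```

The teleportation term is exactly what makes the chain irreducible and aperiodic regardless of the link structure, which is why the stationary distribution is unique.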

5.
Stationarity and the Markov property of optimal-solution processes for stochastic programming problems with stochastic processes    Cited by: 1 (self-citations: 0, citations by others: 1)
It is proved that the optimal solution set of a stochastic programming problem involving a stochastic process contains at least one sequence of optimal solutions that are measurable stochastic processes; moreover, if the stochastic process in the problem is stationary and Markovian, then the optimal-solution process of the problem inherits the corresponding properties.

6.
In this paper, we first give conditions under which the family of finite-dimensional distributions of a simple point process formed by certain jump times of a Markov process converges weakly to the corresponding distributions of a Poisson process, and we discuss traffic processes of stationary Markovian queueing systems whose finite-dimensional distributions converge weakly to those of a Poisson process. We then prove that, under heavy traffic, the finite-dimensional distributions of the departure process of a GI/M/1 queueing system converge weakly to the corresponding distributions of a Poisson process.

7.
Let X(ω) = {x(t, ω), t ≥ 0} be a Markov chain defined on a complete probability space (Ω, F, P) with state space I = {0, 1, 2, …}. Unless otherwise stated, X(ω) is assumed to have a standard transition matrix and to be completely separable, Borel measurable, and stable in every state. Let …

8.
This paper first gives a general expression for the probability generating function of the sum Sn of a finite-state integer-valued Markov chain {Xi} with stationary transition probabilities. Using this result, for two-state chains it is proved under very general conditions that the distribution of Sn converges to the convolution of a geometric-type distribution and a compound Poisson distribution; convergence in stronger senses is also discussed. For multi-state chains, limit distributions are given in certain special cases.

9.
§1 Preliminaries. The analytic properties of the transition probabilities of non-homogeneous Markov processes and the sample-path properties of non-homogeneous countable Markov processes have already been studied fairly systematically. This paper discusses the strong Markov property of non-homogeneous countable Markov processes (hereafter, Markov chains), an important part of the basic theory of Markov chains. A right-standard Markov chain that is separable, Borel measurable, and lower semi-continuous from the right is called a right-regular Markov chain (see Definition 2.1). We first show that every right-standard Markov chain has a right-regular modification; then, by examining the properties of the shifted process, we prove that every right-regular Markov chain has the strong Markov property. Thus, in the right-standard case, this paper extends the results of [6], Chapter 2, §4 and §6, on right- …

10.
This paper studies the central limit theorem for non-time-homogeneous countable Markov chains whose transition probability matrices converge uniformly in the Cesàro sense. Using exponential equivalence and the Gärtner-Ellis theorem, the corresponding moderate deviation results are obtained.

11.
In this paper, we study the two-sided taboo limit processes that arise when a Markov chain or process is conditioned on staying in some set A for a long period of time. The taboo limit is time-homogeneous after time 0 and time-inhomogeneous before time 0. The time-reversed limit has this same qualitative structure. The precise transition structure at the taboo limit is identified in the context of discrete- and continuous-time Markov chains, as well as diffusions. In addition, we present a perfect simulation algorithm for generating exact samples from the quasi-stationary distribution of a finite-state Markov chain.
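For a finite-state chain, the quasi-stationary distribution mentioned in this abstract can be computed directly as the normalized left Perron eigenvector of the substochastic matrix restricted to the taboo set (a sketch on a hypothetical absorbing chain; this is the spectral characterization of the QSD, not the paper's perfect-simulation algorithm):

```python
import numpy as np

# Hypothetical chain on {0,1,2,3}; state 3 is absorbing. Condition on
# staying in A = {0,1,2}. The QSD is the normalized left Perron
# eigenvector of the substochastic matrix Q = P restricted to A.
P = np.array([[0.6, 0.3, 0.0, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.4, 0.4, 0.2],
              [0.0, 0.0, 0.0, 1.0]])
Q = P[:3, :3]

vals, vecs = np.linalg.eig(Q.T)
k = np.argmax(np.real(vals))      # Perron root = spectral radius of Q
qsd = np.real(vecs[:, k])
qsd /= qsd.sum()                  # normalization also fixes the sign
rho = np.real(vals[k])            # survival rate: P(T_A > n) ~ rho^n

# Fixed-point property: starting from the QSD and conditioning on
# survival for one step returns the QSD, i.e. qsd @ Q = rho * qsd.
assert np.allclose(qsd @ Q, rho * qsd)
print(qsd, rho)
```

Since Q is irreducible on A, Perron-Frobenius theory guarantees the eigenvector is strictly positive and the eigenvalue rho lies strictly between 0 and 1, so conditioning on long survival makes sense.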

12.
We obtain sufficient criteria for central limit theorems (CLTs) for ergodic continuous-time Markov chains (CTMCs). We apply the results to establish CLTs for continuous-time single birth processes. Moreover, we present an explicit expression of the time average variance constant for a single birth process whenever a CLT exists. Several examples are given to illustrate these results.

13.
Limit theorems for functionals of classical (homogeneous) Markov renewal and semi-Markov processes have been known for a long time, since the pioneering work of Pyke and Schaufele (Limit theorems for Markov renewal processes, Ann. Math. Statist., 35(4):1746–1764, 1964). Since then, these processes, as well as their time-inhomogeneous generalizations, have found many applications, for example, in finance and insurance. Unfortunately, no limit theorems have been obtained for functionals of inhomogeneous Markov renewal and semi-Markov processes as of today, to the best of the authors' knowledge. In this article, we provide strong law of large numbers and central limit theorem results for such processes. In particular, we make an important connection of our results with the theory of ergodicity of inhomogeneous Markov chains. Finally, we provide an application to risk processes used in insurance by considering an inhomogeneous semi-Markov version of the well-known continuous-time Markov chain model, widely used in the literature.

14.
This paper deals with a continuous-time Markov decision process in Borel state and action spaces and with unbounded transition rates. Under history-dependent policies, the controlled process may not be Markov. The main contribution is that for such non-Markov processes we establish the Dynkin formula, which plays an important role in establishing optimality results for continuous-time Markov decision processes. We further illustrate this by showing, for a discounted continuous-time Markov decision process, the existence of a deterministic stationary optimal policy (out of the class of history-dependent policies) and characterizing the value function through the Bellman equation.

15.
We study the limit behaviour of a nonlinear differential equation whose solution is a superadditive generalisation of a stochastic matrix, prove convergence, and provide necessary and sufficient conditions for ergodicity. In the linear case, the solution of our differential equation is equal to the matrix exponential of an intensity matrix and can then be interpreted as the transition operator of a homogeneous continuous-time Markov chain. Similarly, in the generalised nonlinear case that we consider, the solution can be interpreted as the lower transition operator of a specific set of non-homogeneous continuous-time Markov chains, called an imprecise continuous-time Markov chain. In this context, our convergence result shows that for a fixed initial state, an imprecise continuous-time Markov chain always converges to a limiting distribution, and our ergodicity result provides a necessary and sufficient condition for this limiting distribution to be independent of the initial state.

16.
This work develops asymptotically optimal dividend policies to maximize the expected present value of dividends until ruin. Compound Poisson processes with regime switching are used to model the surplus, and the switching (a continuous-time controlled Markov chain) represents the random environment and other economic conditions. Assuming the switching to be fast varying, together with suitable conditions, it is shown that the system has a limit that is an average with respect to the invariant measure of a related Markov chain. Under simple conditions, the optimal policy of the limit dividend strategy is a threshold policy. Using the optimal policy of the limit system as a guide, a feedback control for the original surplus is then developed. It is demonstrated that the constructed dividend policy is asymptotically optimal.

17.
We extend the central limit theorem for additive functionals of a stationary, ergodic Markov chain with normal transition operator due to Gordin and Lifšic, 1981 [A remark about a Markov process with normal transition operator, In: Third Vilnius Conference on Probability and Statistics 1, pp. 147–148] to continuous-time Markov processes with normal generators. As examples, we discuss random walks on compact commutative hypergroups as well as certain random walks on non-commutative, compact groups.

18.
This work develops asymptotically optimal controls for discrete-time singularly perturbed Markov decision processes (MDPs) having weak and strong interactions. The focus is on finite-state-space MDP problems. The state space of the underlying Markov chain can be decomposed into a number of recurrent classes, or a number of recurrent classes and a group of transient states. Using a hierarchical control approach, continuous-time limit problems that are much simpler to handle than the original ones are derived. Based on the optimal solutions for the limit problems, nearly optimal decisions for the original problems are obtained. The asymptotic optimality of such controls is proved and the rate of convergence is provided. Infinite horizon problems are considered; both discounted costs and long-run average costs are examined.

19.
We study infinite horizon control of continuous-time non-linear branching processes with almost sure extinction for general (positive or negative) discount. Our main goal is to study the link between infinite horizon control of these processes and an optimization problem involving their quasi-stationary distributions and the corresponding extinction rates. More precisely, we obtain an equivalent of the value function when the discount parameter is close to the threshold where the value function becomes infinite, and we characterize the optimal Markov control in this limit. To achieve this, we present a new proof of the dynamic programming principle based upon a pseudo-Markov property for controlled jump processes. We also prove the convergence to a unique quasi-stationary distribution of non-linear branching processes controlled by a Markov control conditioned on non-extinction.

20.
We investigate integral-type functionals of the first hitting times for continuous-time Markov chains. Recursive formulas and drift conditions for calculating or bounding integral-type functionals are obtained. The connection between the subexponential integral-type functionals and the subexponential ergodicity is established. Moreover, these results are applied to the birth-death processes. Polynomial integral-type functionals and polynomial ergodicity are studied, and a sufficient criterion for a central limit theorem is also presented.
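The recursive-formula approach for birth-death processes can be illustrated in its simplest case, the mean of the first hitting time of 0 (a sketch on a hypothetical truncated birth-death chain with assumed rates; the paper treats general integral-type functionals, of which this is only the f ≡ 1 instance):

```python
import numpy as np

# Birth-death CTMC on {0,...,N} with birth rates lam[i], death rates mu[i].
# The one-step means step[k] = E_k[T_{k-1}] satisfy the recursion
#   step[k] = (1 + lam_k * step[k+1]) / mu_k,  with lam_N = 0,
# and E_i[T_0] = step[1] + ... + step[i].
N = 5
lam = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.0])  # lam[0] unused, lam[N] = 0
mu = np.array([0.0, 2.0, 2.0, 2.0, 2.0, 2.0])   # mu[0] unused

# Recursion, solved from the top boundary down.
step = np.zeros(N + 1)
step[N] = 1.0 / mu[N]
for k in range(N - 1, 0, -1):
    step[k] = (1.0 + lam[k] * step[k + 1]) / mu[k]
h_rec = np.cumsum(step)[1:]       # h_rec[i-1] = E_i[T_0]

# Cross-check against the generator system (Q h)(i) = -1 on {1,...,N},
# with the boundary condition h_0 = 0.
Q = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    Q[i, i - 1] = mu[i]
    if i < N:
        Q[i, i + 1] = lam[i]
    Q[i, i] = -(mu[i] + lam[i])
h_lin = np.linalg.solve(Q[1:, 1:], -np.ones(N))

assert np.allclose(h_rec, h_lin)
print(h_rec)
```

The same downward recursion extends to higher moments and to polynomial integral-type functionals, which is what connects it to the polynomial ergodicity results mentioned in the abstract.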


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号