Similar Literature
20 similar articles found (search time: 31 ms).
1.
If a Markov chain converges rapidly to stationarity, then the time until the first hit on a rarely visited set of states is approximately exponentially distributed; moreover, an explicit bound for the error in this approximation can be given. This complements results of Keilson.
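As a quick numerical illustration of the statement above, the following sketch (with a made-up three-state chain, not taken from the paper) simulates hitting times of a rarely visited state and checks that their distribution looks exponential, in that the mean and standard deviation nearly coincide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fast-mixing pair of states plus a rarely entered third state:
# from states 0/1 the chain jumps to the rare state 2 with probability eps.
eps = 0.01
P = np.array([
    [0.495, 0.495, eps],
    [0.495, 0.495, eps],
    [0.5,   0.5,   0.0],   # the rare state bounces straight back
])

def hitting_time(P, start, target, rng):
    """Number of steps until the chain started at `start` first enters `target`."""
    s, steps = start, 0
    while True:
        s = rng.choice(len(P), p=P[s])
        steps += 1
        if s == target:
            return steps

times = np.array([hitting_time(P, 0, 2, rng) for _ in range(2000)])
# For an exponential law mean and standard deviation agree; here the hitting
# time is geometric with mean 1/eps = 100, so the ratio should be close to 1.
print(times.mean(), times.std())
```

The explicit error bound of the paper quantifies how far this distribution is from exponential; the simulation only makes the approximation visible.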

2.
Daniel Rudolf  Björn Sprungk 《PAMM》2017,17(1):731-734
Based on the proposed states of the Metropolis-Hastings (MH) algorithm we construct an MH importance sampling estimator for the approximation of expectations. The new approximation scheme is asymptotically correct, and numerical experiments indicate that it can outperform the classical MH Markov chain Monte Carlo estimator. (© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
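The following sketch illustrates the idea of reusing proposed states. The weighting shown is the simple Rao-Blackwellized average of proposals by their acceptance probabilities, which may differ from the exact estimator of the paper; the target, proposal scale, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * x * x            # standard normal target, up to a constant

n, step = 20000, 1.0
x = 0.0
mh_sum = 0.0                        # classical MH ergodic average of f(x) = x^2
rb_sum = 0.0                        # average that also reuses proposed states

for _ in range(n):
    y = x + step * rng.standard_normal()              # random-walk proposal
    alpha = min(1.0, np.exp(log_target(y) - log_target(x)))
    # Weight the proposed state by its acceptance probability before the
    # accept/reject step, so rejected proposals still contribute.
    rb_sum += alpha * y * y + (1.0 - alpha) * x * x
    if rng.random() < alpha:
        x = y
    mh_sum += x * x

print(mh_sum / n, rb_sum / n)       # both estimate E[X^2] = 1
```

Because every proposal contributes, the proposal-reuse average can have lower variance than the plain ergodic average, which is the kind of improvement the abstract reports.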

3.
This work develops numerical approximation algorithms for solutions of stochastic differential equations with Markovian switching. The existing numerical algorithms all use a discrete-time Markov chain for the approximation of the continuous-time Markov chain. In contrast, we generate the continuous-time Markov chain directly, and then use its skeleton process in the approximation algorithm. Focusing on weak approximation, we take a re-embedding approach, and define the approximation and the solution to the switching stochastic differential equation on the same space. In our approximation, we use a sequence of independent and identically distributed (i.i.d.) random variables in lieu of the common practice of using Brownian increments. By virtue of the strong invariance principle, we ascertain rates of convergence in the pathwise sense for the weak approximation scheme.
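A rough sketch of the two ingredients highlighted in the abstract, generating the continuous-time Markov chain directly and replacing Brownian increments by i.i.d. variables; the generator Q and the regime-dependent coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generator of a 2-state continuous-time Markov chain (illustrative values).
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])

def ctmc_path(Q, r0, T, rng):
    """Generate the continuous-time chain directly: jump times and states up to T."""
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while True:
        rate = -Q[r, r]
        t += rng.exponential(1.0 / rate)
        if t >= T:
            return times, states
        p = Q[r].copy()
        p[r] = 0.0
        p /= rate
        r = int(rng.choice(len(Q), p=p))
        times.append(t)
        states.append(r)

# Regime-dependent drift and diffusion (hypothetical coefficients).
drift = [lambda x: -x, lambda x: -0.5 * x]
sigma = [0.3, 1.0]

T, h = 1.0, 1e-3
times, states = ctmc_path(Q, 0, T, rng)
x, t = 1.0, 0.0
while t < T:
    r = states[np.searchsorted(times, t, side="right") - 1]  # skeleton regime
    xi = rng.choice([-1.0, 1.0])   # i.i.d. signs in lieu of Brownian increments
    x += drift[r](x) * h + sigma[r] * np.sqrt(h) * xi
    t += h
print(x)
```

The skeleton lookup evaluates the regime at each Euler step from the directly generated jump times, rather than simulating a discrete-time approximation of the switching chain.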

4.
We introduce the concepts of lumpability and commutativity of a continuous time discrete state space Markov process, and provide a necessary and sufficient condition for a lumpable Markov process to be commutative. Under suitable conditions we recover some of the basic quantities of the original Markov process from the jump chain of the lumped Markov process.
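For a concrete, if simplistic, picture of lumping: the sketch below tests the standard strong-lumpability criterion for a generator (the aggregate rate into each block must agree for all states of a block) and builds the lumped generator. The paper's notions of lumpability and commutativity are more general; the matrix here is invented so that the chosen partition happens to be lumpable.

```python
import numpy as np

# Generator of a 4-state continuous-time chain (rates invented so that the
# partition below happens to be lumpable).
Q = np.array([
    [-3.0,  1.0,  1.0,  1.0],
    [ 2.0, -4.0,  1.0,  1.0],
    [ 1.0,  1.0, -3.0,  1.0],
    [ 1.0,  1.0,  2.0, -4.0],
])
partition = [[0, 1], [2, 3]]        # candidate lumping into two blocks

def is_lumpable(Q, partition):
    """Strong lumpability: for every pair of blocks, the aggregate rate from a
    state into the target block must not depend on the state within its block."""
    for block in partition:
        for target in partition:
            rates = [Q[i, target].sum() for i in block]
            if not np.allclose(rates, rates[0]):
                return False
    return True

def lumped_generator(Q, partition):
    k = len(partition)
    Qh = np.zeros((k, k))
    for a, block in enumerate(partition):
        for b, target in enumerate(partition):
            Qh[a, b] = Q[block[0], target].sum()
    return Qh

print(is_lumpable(Q, partition))    # True for this Q
print(lumped_generator(Q, partition))
```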

5.
We construct different classes of lumpings for a family of Markov chain products which reflect the structure of a given finite poset. We essentially use combinatorial methods. We prove that, for such a product, every lumping can be obtained from the action of a suitable subgroup of the generalized wreath product of symmetric groups, acting on the underlying poset block structure, if and only if the poset defining the Markov process is totally ordered, and one takes the uniform Markov operator in each factor state space. Finally we show that, when the state space is a homogeneous space associated with a Gelfand pair, the spectral analysis of the corresponding lumped Markov chain is completely determined by the decomposition of the group action into irreducible submodules.

6.
We propose the construction of a quantum Markov chain that corresponds to a “forward” quantum Markov chain. In the given construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions.

7.
This work develops numerical approximation methods for quantile hedging of contingent claims with mortality components in incomplete markets, in which guaranteed minimum death benefits (GMDBs) cannot be perfectly hedged. A regime-switching jump-diffusion model is used to describe the dynamic system and the hedging function for GMDBs, where the switching is represented by a continuous-time Markov chain. Using Markov chain approximation techniques, a discrete-time controlled Markov chain with two components is constructed. Under simple conditions, convergence of the approximation to the value function is established. Examples of the quantile hedging model for guaranteed minimum death benefits under linear jumps and general jumps are also presented.

8.
1. Introduction. The motivation for writing this paper came from calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature; see, for example, [3-7]. For a finite Markov chain, …
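Read literally, rotating a transition matrix by 180° reverses both the row and the column order, i.e. it relabels state i as n-1-i, and the result is again stochastic. A minimal sketch with an arbitrary small chain (the duality studied in the paper may carry more structure than this relabelling):

```python
import numpy as np

# Transition matrix of a small birth-death chain (illustrative).
P = np.array([
    [0.5, 0.5, 0.0],
    [0.3, 0.4, 0.3],
    [0.0, 0.6, 0.4],
])

# Rotating by 180 degrees reverses both row and column order, i.e. it relabels
# state i as n-1-i; row sums are preserved, so the result is again stochastic.
P_dual = P[::-1, ::-1]

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

pi = stationary(P)
pi_dual = stationary(P_dual)
print(pi, pi_dual)                  # relabelling reverses the stationary vector
```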

9.
A strongly ergodic non-homogeneous Markov chain is considered in this paper. As an analogue of the Poisson limit theorem for a homogeneous Markov chain recurring to small cylinder sets, a Poisson limit theorem is given for the non-homogeneous Markov chain. Some results on approximate independence and on the probabilities of small cylinder sets are also given.

10.
In this paper we study the flux through a finite Markov chain of a quantity, which we call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for modelling open systems whose dynamics have a Markov property.
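A minimal sketch of this mass picture, with a hypothetical chain having two transient states, one absorbing state, and a unit source: in steady state the transient mass is constant and the absorbing state swallows exactly the injected mass per step.

```python
import numpy as np

# Chain with transient states {0, 1} and one absorbing state {2} (illustrative).
P = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.1, 0.5],
    [0.0, 0.0, 1.0],
])
source = np.array([1.0, 0.0, 0.0])   # one unit of mass injected at state 0 per step

m = np.zeros(3)
for _ in range(200):
    m = m @ P + source               # mass moves, then the source refills

transient = m[:2]                    # converges to the steady transient mass
flux = transient @ P[:2, 2]          # mass absorbed per step
print(transient, flux)               # flux matches the injected mass, 1.0
```

The steady transient mass solves m = mQ + s, where Q is the transient sub-matrix, so the absorbed flux necessarily balances the source, which is the conservation property the abstract emphasises.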

11.
Portfolio selection under the mean-variance criterion with partial information
We study the utility-maximization portfolio problem under partial information. The price of the risky asset (stock) follows a jump-diffusion process whose coefficients are modulated by a Markov chain. Using nonlinear filtering techniques, the problem with partial information is transformed into one with complete information. The optimal investment strategy under the mean-variance criterion is then obtained via stochastic optimization and backward stochastic differential equations.

12.
We study countable Markov chains in a Markovian environment and prove that the number of returns of the process to small cylinder sets is asymptotically Poisson distributed. To this end we introduce an entropy function h and first establish a Shannon-McMillan-Breiman theorem for Markov chains in a Markovian environment; an example of Poisson approximation for a non-Markov process is also given. When the environment process degenerates to a constant sequence, we recover the Poisson limit theorem for countable Markov chains, which extends the corresponding result of Pitskel for finite Markov chains.

13.
An output feedback controller is proposed for a class of uncertain nonlinear systems preceded by unknown backlash-like hysteresis, where the hysteresis is modeled by a differential equation. The unknown nonlinear functions are approximated by fuzzy systems based on the universal approximation theorem, where both the premise and the consequent parts of the fuzzy rules are tuned with adaptive schemes. The proposed approach does not require availability of the states, which is essential in practice, and uses an observer to estimate them. An adaptive robust structure copes with the lumped uncertainties generated by the state-estimation error, the approximation error of the fuzzy systems, and external disturbances. Owing to its adaptive structure, the bound on the lumped uncertainties need not be known, and at the same time chattering is attenuated effectively. The strictly positive real (SPR) Lyapunov synthesis approach is used to guarantee asymptotic stability of the closed-loop system. Simulation results illustrate the effectiveness of the proposed method.

14.
Some strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for nonsymmetric Markov chain fields (NSMC) on Cayley trees are studied. In the proof, a new technique for the study of strong limit theorems of Markov chains is extended to the case of Markov chain fields. The asymptotic equipartition properties with almost everywhere (a.e.) convergence for NSMC on Cayley trees are obtained.

15.
This paper deals with the asymptotic optimality of a stochastic dynamic system driven by a singularly perturbed Markov chain with finite state space. The states of the Markov chain belong to several groups such that transitions among the states within each group occur much more frequently than transitions among states in different groups. Aggregating the states of the Markov chain leads to a limit control problem, obtained by replacing the states in each group by the corresponding average distribution. The limit control problem is simpler to solve than the original one. A nearly optimal solution for the original problem is constructed from the optimal solution to the limit problem. To demonstrate, the suggested approach is applied to examples of production planning in manufacturing systems.
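The aggregation step can be sketched as follows, with an invented two-group generator: fast intra-group transitions and slow (order eps) inter-group ones. Each group is replaced by the stationary distribution of its fast part, and the resulting two-state limit chain reproduces the full stationary distribution up to O(eps). The control layer of the paper is omitted.

```python
import numpy as np

eps = 0.01                          # scale of the slow inter-group transitions
# Fast part: transitions inside group A = {0, 1} and group B = {2, 3} only.
Q_fast = np.array([
    [-1.0,  1.0,  0.0,  0.0],
    [ 2.0, -2.0,  0.0,  0.0],
    [ 0.0,  0.0, -1.0,  1.0],
    [ 0.0,  0.0,  3.0, -3.0],
])
# Slow part: transitions between the groups.
Q_slow = np.array([
    [-1.0,  0.0,  1.0,  0.0],
    [ 0.0, -1.0,  0.0,  1.0],
    [ 1.0,  0.0, -1.0,  0.0],
    [ 0.0,  1.0,  0.0, -1.0],
])
Q = Q_fast + eps * Q_slow

def stationary(Q):
    """Stationary distribution of a generator, via least squares."""
    n = len(Q)
    A = np.vstack([Q.T, np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

groups = [[0, 1], [2, 3]]
nus = [stationary(Q_fast[np.ix_(g, g)]) for g in groups]   # within-group dists

# Aggregated (limit) generator on the two groups.
Qbar = np.zeros((2, 2))
for a in range(2):
    for b in range(2):
        Qbar[a, b] = nus[a] @ (eps * Q_slow)[np.ix_(groups[a], groups[b])].sum(axis=1)
theta = stationary(Qbar)

pi_approx = np.concatenate([theta[a] * nus[a] for a in range(2)])
pi_exact = stationary(Q)
print(pi_exact, pi_approx)           # agree up to O(eps)
```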

16.
A new algorithm for classifying the states of a homogeneous Markov chain having finitely many states is presented, which enables the investigation of the asymptotic behavior of semi-Markov processes in which the Markov chain is embedded. An application of the algorithm to a social security problem is also presented.
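The paper's algorithm itself is not reproduced here, but a generic reachability-based classification of the states of a finite homogeneous chain into recurrent and transient communicating classes can be sketched as:

```python
import numpy as np

def classify(P):
    """Split the states of a finite homogeneous Markov chain into communicating
    classes and mark each class as recurrent (closed) or transient."""
    n = len(P)
    reach = (P > 0) | np.eye(n, dtype=bool)
    for k in range(n):                              # Floyd-Warshall closure
        reach |= reach[:, [k]] & reach[[k], :]
    comm = reach & reach.T                          # mutual reachability
    seen, classes = set(), []
    for i in range(n):
        if i in seen:
            continue
        cls = [j for j in range(n) if comm[i, j]]
        seen.update(cls)
        # A class is recurrent iff no state outside the class is reachable.
        recurrent = all((not reach[i, j]) or (j in cls) for j in range(n))
        classes.append((cls, recurrent))
    return classes

P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.1, 0.0, 0.4, 0.5],
    [0.0, 0.1, 0.3, 0.6],
])
print(classify(P))    # [([0, 1], True), ([2, 3], False)]
```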

17.
Any stationary 1-dependent Markov chain with up to four states is a 2-block factor of independent, identically distributed random variables. There is a stationary 1-dependent Markov chain with five states which is not, even though every 1-dependent renewal process is a 2-block factor.
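To illustrate what a 2-block factor is: apply a function to overlapping pairs of i.i.d. variables; the resulting process is stationary and 1-dependent. The ascent-indicator example below is a standard illustration of the construction, not the five-state chain of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

U = rng.random(100001)                    # i.i.d. uniforms
Y = (U[:-1] < U[1:]).astype(int)          # 2-block factor f(U_i, U_{i+1})

# Y is stationary and 1-dependent: Y_i and Y_j are independent when |i-j| > 1.
c1 = np.corrcoef(Y[:-1], Y[1:])[0, 1]     # lag 1: nonzero (equals -1/3 here)
c2 = np.corrcoef(Y[:-2], Y[2:])[0, 1]     # lag 2: zero up to sampling noise
print(c1, c2)
```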

18.
A Markov chain plays an important role in an interacting multiple model (IMM) algorithm which has been shown to be effective for target tracking systems. Such systems are described by a mixing of continuous states and discrete modes. The switching between system modes is governed by a Markov chain. In real world applications, this Markov chain may change or needs to be changed. Therefore, one may be concerned about a target tracking algorithm with the switching of a Markov chain. This paper concentrates on fault-tolerant algorithm design and algorithm analysis of IMM estimation with the switching of a Markov chain. Monte Carlo simulations are carried out and several conclusions are given.

19.
The purpose of this paper is to study the asymptotic equipartition property of odd-even Markov chain fields on Cayley trees. We first give strong laws of large numbers for the frequencies of occurrence of states and of ordered pairs of states for odd-even Markov chain fields on Cayley trees, and then prove the asymptotic equipartition property with a.e. convergence.

20.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposing the chain into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process has been captured by the class.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property.

The presented reduction method is an approximate one. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then allows an assumption that yields the reduction: the ergodic class is reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it is again a Markov chain.
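A minimal sketch of this reduction recipe, on an invented four-state ergodic chain with a two-time-scale structure: order the states by decreasing steady-state probability, keep the stronger part, and renormalise the rows so the reduced system is again stochastic.

```python
import numpy as np

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# Irreducible ergodic chain with a two-time-scale structure: states 0 and 1
# carry almost all of the steady-state probability, states 2 and 3 are rare.
P = np.array([
    [0.70, 0.28, 0.01, 0.01],
    [0.30, 0.68, 0.01, 0.01],
    [0.45, 0.45, 0.05, 0.05],
    [0.45, 0.45, 0.05, 0.05],
])
pi = stationary(P)
order = np.argsort(pi)[::-1]             # decreasing steady-state probability

keep = order[:2]                          # keep only the "stronger" part
P_red = P[np.ix_(keep, keep)]
P_red = P_red / P_red.sum(axis=1, keepdims=True)   # renormalise the rows

print(pi[order])
print(stationary(P_red))                  # close to pi[keep] renormalised
```

Because the discarded states are both rare and fast, the stationary distribution of the reduced chain stays close to the restriction of the original one, which is the sense in which the approximation is justified by the two-time-scale property.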
