Similar Literature — 20 similar records found
1.
By constructing martingales, this paper studies strong limit theorems for even-odd Markov chain fields on Cayley trees, gives strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for even-odd Markov chain fields on Cayley trees, and generalizes a known result.

2.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore of considerable interest.

The long-run behaviour of a homogeneous finite Markov chain is determined by its persistent states, obtained after decomposing the chain into classes of communicating states. In this paper we present a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state that is independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process enters it.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of steady-state probability. The two-time-scale property of the chain then allows an assumption that yields the reduction: the ergodic class is reduced to its stronger part, which contains the most important events and also has the slower evolution. The reduced system retains the stochastic property, so it is again a Markov chain.
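The starting point of the reduction described above, the ordering of states by steady-state probability, can be sketched as follows. This is a minimal illustration with a made-up transition matrix, not the paper's algorithm:

```python
import numpy as np

# Made-up 4-state irreducible chain: two "slow", frequently occupied states
# and two "fast", rarely occupied ones (a crude two-time-scale structure).
P = np.array([
    [0.90, 0.08, 0.01, 0.01],
    [0.10, 0.85, 0.03, 0.02],
    [0.40, 0.40, 0.10, 0.10],
    [0.45, 0.45, 0.05, 0.05],
])

# The steady state pi solves pi P = pi with the entries summing to 1:
# take the left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Arrange states in decreasing order of steady-state probability; the
# "stronger part" kept by the reduction is a prefix of this ordering.
order = np.argsort(-pi)
print(order, np.round(pi[order], 4))
```

The rarely occupied states at the tail of the ordering are the candidates for elimination.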

3.
4.
The aim of this paper is to study the asymptotic equipartition property of even-odd Markov chain fields on Cayley trees. We first give strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for even-odd Markov chain fields on Cayley trees, and then prove the asymptotic equipartition property with a.e. convergence.

5.
In this paper we consider stopping problems for continuous-time Markov chains under a general risk-sensitive optimization criterion for problems with finite and infinite time horizon. More precisely our aim is to maximize the certainty equivalent of the stopping reward minus cost over the time horizon. We derive optimality equations for the value functions and prove the existence of optimal stopping times. The exponential utility is treated as a special case. In contrast to risk-neutral stopping problems it may be optimal to stop between jumps of the Markov chain. We briefly discuss the influence of the risk sensitivity on the optimal stopping time and consider a special house selling problem as an example.

6.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are "entangled" in a sense specified in the paper.

The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication.

The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study.

In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain.

Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10
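The contrast between the Schur (entrywise) product used above and ordinary matrix multiplication can be made concrete. This is a toy illustration with an invented 2-state transition matrix, not the authors' construction: the amplitudes are taken as (real) square roots of the classical probabilities, so their entrywise square recovers the classical chain, while the ordinary matrix product does not.

```python
import numpy as np

# Invented 2-state classical transition matrix.
P = np.array([[0.2, 0.8],
              [0.5, 0.5]])

# Amplitudes whose squared moduli recover the classical probabilities
# (here we take the non-negative square roots, viewed as complex numbers).
A = np.sqrt(P).astype(complex)

schur = A * A     # Schur product: entrywise multiplication
matmul = A @ A    # ordinary matrix product: sums over intermediate states

# The Schur product of the amplitude matrix with itself returns the
# classical transition matrix; the matrix product mixes paths and does not.
print(np.allclose(schur.real, P), np.allclose(matmul.real, P))
```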

7.
On the asymptotic equipartition property of nonhomogeneous m-th order Markov information sources
This paper studies the asymptotic equipartition property of nonhomogeneous m-th order Markov information sources. We first obtain a class of strong limit theorems for (m+1)-variable functions of such sources. As corollaries, we obtain several limit theorems for the states and the entropy density of arbitrary nonhomogeneous m-th order Markov information sources. Finally, we obtain the asymptotic equipartition property for a class of nonhomogeneous m-th order Markov information sources.

8.
We propose the construction of a quantum Markov chain that corresponds to a "forward" quantum Markov chain. In the given construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions.

9.
The Poisson limit law for Markov chains in a Markovian environment
Wang Hanxing, Dai Yonglong. Acta Mathematica Sinica (《数学学报》), 1997, 40(2): 265-270
This paper studies Markov chains in a Markovian environment and proves that the number of returns of such a process to small cylinder sets is asymptotically Poisson distributed. We also give a sufficient condition for the process to be (?)-mixing, together with an exponential estimate for the probability that the process returns to a small cylinder set.

10.
The asymptotic equipartition property is a basic theorem in information theory. In this paper, we study the strong law of large numbers for Markov chains in a single-infinite Markovian environment on a countable state space. As a corollary, we obtain strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for this process. Finally, we give the asymptotic equipartition property of Markov chains in a single-infinite Markovian environment on a countable state space.

11.
In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful in the modelling of open systems whose dynamics has the Markov property.

12.
We establish a stochastic extension of Ramsey's theorem. Any Markov chain generates a filtration relative to which one may define a notion of stopping times. A stochastic colouring is any k-valued (k&lt;∞) colour function defined on all pairs consisting of a bounded stopping time and a finite partial history of the chain truncated before this stopping time. For any bounded stopping time θ and any infinite history ω of the Markov chain, let ω|θ denote the finite partial history up to and including the time θ(ω). Given k=2, for every ε&gt;0, we prove that there is an increasing sequence θ1&lt;θ2&lt;⋯ of bounded stopping times having the property that, with probability greater than 1−ε, the history ω is such that the values assigned to all pairs (ω|θi, θj), with i&lt;j, are the same. Just as with the classical Ramsey theorem, we also obtain an analogous finitary stochastic Ramsey theorem. Furthermore, with appropriate finiteness assumptions, the time one must wait for the last stopping time (in the finitary case) is uniformly bounded, independently of the probability transitions. We generalise the results to any finite number k of colours.

13.
14.
In this paper, we study the strong laws of large numbers for asymptotic even–odd Markov chains indexed by a homogeneous tree. First, the definition of the asymptotic even–odd Markov chain is introduced. Then the strong limit theorem for asymptotic even–odd Markov chains indexed by a homogeneous tree is established. Next, the strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for asymptotic even–odd Markov chains indexed by a homogeneous tree are obtained. Finally, we prove the asymptotic equipartition property (AEP) for these Markov chains.

15.
16.
We have recently developed a global optimization methodology for solving combinatorial problems with either deterministic or stochastic performance functions. This method, the Nested Partitions (NP) method, has been shown to generate a Markov chain and to converge with probability one to a global optimum. In this paper, we study the rate of convergence of the method through the use of Markov Chain Monte Carlo (MCMC) methods, and use this to derive stopping rules that can be applied during simulation-based optimization. A numerical example serves to illustrate the feasibility of our approach.
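The idea of a simulation-based stopping rule for a Markov chain can be sketched generically. The following is not the Nested Partitions method itself but a made-up illustration: simulate a small chain and stop once the empirical state distribution changes by less than a tolerance between checkpoints.

```python
import random

random.seed(0)

# Made-up two-state chain; its stationary distribution is (4/7, 3/7).
P = {0: [(0, 0.7), (1, 0.3)],
     1: [(0, 0.4), (1, 0.6)]}

def step(state):
    """Sample the next state from the transition row of the current state."""
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return nxt  # numerical fallback

counts = [0, 0]
state, steps = 0, 0
prev = [0.0, 0.0]
while True:
    for _ in range(1000):          # checkpoint every 1000 steps
        state = step(state)
        counts[state] += 1
        steps += 1
    freq = [c / sum(counts) for c in counts]
    if max(abs(f - q) for f, q in zip(freq, prev)) < 1e-3:
        break                      # empirical distribution has settled
    prev = freq

print(steps, [round(f, 3) for f in freq])
```

In the paper's setting, the stopping decision is instead derived from MCMC convergence-rate estimates; the sketch only conveys the shape of a simulation-driven stopping rule.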

17.
In this paper, we deal with the valuation problem for two-asset perpetual American maximum options with Markov-modulated dynamics, in which the asset price processes are driven by a hidden Markov chain. We give the optimal stopping rule and derive explicit pricing formulas by solving a series of variational inequalities. A proof of optimality of the result is given at the end.

18.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.

19.
We study nonzero-sum stopping games with randomized stopping strategies. The existence of Nash equilibrium and ɛ-equilibrium strategies is discussed under various assumptions on the players' random payoffs and utility functions, which depend on the observed discrete-time Markov process. We then present a model of a market game in which randomized stopping times are involved; the model is a mixture of a stochastic game and a stopping game. Research supported by grant PBZ-KBN-016/P03/99.

20.
This paper studies an M^ξ/G/1 queueing model with setup time and multiple adaptive vacations, in which the arrival rates differ across periods. Using the embedded Markov chain method, we derive the distributions of the stationary queue length and of the waiting time (under the first-come, first-served discipline), verify that the stationary queue length and the stationary waiting time possess the stochastic decomposition property, and give the distribution of the busy period. Many M^ξ/G/1 queueing models can be regarded as special cases of this model.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号