Similar Documents
20 similar documents found.
1.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite-state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern admits such a factorization. This result has implications for the embedding problem for doubly stochastic Markov chains. We also apply the obtained characterization to chain majorization.
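A pattern that is simultaneously reflexive, transitive, and symmetric is exactly the indicator matrix of an equivalence relation. As a rough illustration of the building blocks in this characterization (not the paper's factorization algorithm), the sketch below checks the three properties of a 0/1 pattern; the function names and the example matrix are our own:

```python
import numpy as np

def is_reflexive(P):
    # Every diagonal entry of the pattern must be positive.
    return all(P[i][i] for i in range(len(P)))

def is_symmetric(P):
    return np.array_equal(P, P.T)

def is_transitive(P):
    # Transitivity: the Boolean product of the pattern with itself
    # must not reach any pair outside the pattern's support.
    B = np.asarray(P).astype(int)
    two_step = (B @ B) > 0
    return not np.any(two_step & ~B.astype(bool))

def is_equivalence_pattern(P):
    P = np.asarray(P)
    return is_reflexive(P) and is_symmetric(P) and is_transitive(P)

# Block pattern of the equivalence relation {0,1} ∪ {2}: qualifies.
P = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
print(is_equivalence_pattern(P))  # True
```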

2.
In this paper we carry over the concept of reverse probabilistic representations, developed for diffusion processes in Milstein, Schoenmakers and Spokoiny [G.N. Milstein, J.G.M. Schoenmakers, V. Spokoiny, Transition density estimation for stochastic differential equations via forward–reverse representations, Bernoulli 10 (2) (2004) 281–312], to discrete-time Markov chains. We outline the construction of reverse chains in several situations and apply this to processes connected with jump–diffusion models and finite-state Markov chains. By combining forward and reverse representations we then construct transition density estimators for chains which have root-N accuracy in any dimension, and consider some applications.

3.
4.
We consider time-homogeneous Markov chains with state space E_k ≡ {0,1,…,k} and initial distribution concentrated on the state 0. For pairs of such Markov chains, we study the stochastic tail order and the stochastic order in the usual sense between the respective first passage times into the state k. For this purpose, we develop a method based on a specific relation between two stochastic matrices on the state space E_k. Our method provides comparisons that are simpler and more refined than those obtained by the analysis based on spectral gaps. Copyright © 2016 John Wiley & Sons, Ltd.
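As a loose numerical companion to this comparison problem (not the authors' matrix-relation method), the usual stochastic order between first passage times into k can be checked directly: make k absorbing, track the absorbed mass, and compare the resulting CDFs pointwise. All names and the birth–death example matrices below are our own:

```python
import numpy as np

def first_passage_cdf(P, k, n_steps):
    """P(T <= n) for n = 1..n_steps, where T is the first passage time
    into state k starting from state 0, computed by making k absorbing."""
    P = np.asarray(P, dtype=float).copy()
    P[k, :] = 0.0
    P[k, k] = 1.0
    dist = np.zeros(len(P))
    dist[0] = 1.0
    cdf = []
    for _ in range(n_steps):
        dist = dist @ P
        cdf.append(dist[k])  # mass already absorbed in k by time n
    return np.array(cdf)

def st_leq(P1, P2, k, n_steps=200):
    """T1 <=_st T2 (checked up to n_steps): P(T1<=n) >= P(T2<=n) for all n."""
    return bool(np.all(first_passage_cdf(P1, k, n_steps)
                       >= first_passage_cdf(P2, k, n_steps) - 1e-12))

# Chain 1 climbs toward k = 2 faster than chain 2, so T1 <=_st T2.
P1 = [[0.1, 0.9, 0.0], [0.1, 0.0, 0.9], [0.0, 0.0, 1.0]]
P2 = [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.0, 1.0]]
print(st_leq(P1, P2, k=2))  # True
```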

5.
Recursive equations are derived for the conditional distribution of the state of a Markov chain, given observations of a function of the state. Mainly continuous-time chains are considered. The equations for the conditional distribution are given in matrix form and in differential-equation form. The conditional distribution itself forms a Markov process. Special cases considered are doubly stochastic Poisson processes with a Markovian intensity, Markov chains with a random time, and Markovian approximations of semi-Markov processes. Further, the results are used to compute the Radon–Nikodym derivative for two probability measures for a Markov chain, when a function of the state is observed.
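In discrete time, the analogous recursion has a simple matrix form: predict with the transition matrix, then restrict to the states consistent with the observed function value and renormalize. The sketch below is a hypothetical discrete-time illustration of that update, not the paper's continuous-time equations; the matrices and names are invented:

```python
import numpy as np

def filter_step(pi, P, f, y):
    """One recursive update of the conditional state distribution, given the
    next observation y = f(X_{n+1}).  f[i] is the observed value in state i
    (the observation is a deterministic function of the state)."""
    pred = pi @ P                    # prediction step
    mask = (np.asarray(f) == y)      # states consistent with the observation
    post = pred * mask
    return post / post.sum()         # renormalize

P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
f = [0, 0, 1]                # states 0 and 1 are indistinguishable; 2 is visible
pi = np.array([1.0, 0.0, 0.0])
pi = filter_step(pi, P, f, 0)
print(pi)                    # conditional distribution given f(X_1) = 0
```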

6.
We develop some sufficient conditions for the usual stochastic ordering between hitting times of a fixed state for two finite Markov chains with the same state space. Our attention will be focused on the so-called skip-free case and, for the proof of our results, we develop a special type of coupling. We also analyze some applications in the framework of reliability degradation and of times to occurrence of words under random sampling of letters from a finite alphabet. As will be briefly discussed, such fields give rise, in a natural way, to skip-free Markov chains.

7.
We mainly study generalized entropy ergodic theorems for nonhomogeneous Markov chains indexed by trees. We first prove a strong limit theorem for delayed averages of bivariate functions of tree-indexed nonhomogeneous Markov chains. We then obtain a strong law of large numbers for the delayed frequencies of occurrence of states in tree-indexed nonhomogeneous Markov chains, as well as a generalized entropy ergodic theorem for such chains. As corollaries, some known results are generalized. We also prove the uniform integrability of the generalized entropy density for finite-state stochastic processes indexed by locally finite infinite trees.

8.
We justify and discuss expressions for joint lower and upper expectations in imprecise probability trees, in terms of the sub- and supermartingales that can be associated with such trees. These imprecise probability trees can be seen as discrete-time stochastic processes with finite state sets and transition probabilities that are imprecise, in the sense that they are only known to belong to some convex closed set of probability measures. We derive various properties for their joint lower and upper expectations, and in particular a law of iterated expectations. We then focus on the special case of imprecise Markov chains, investigate their Markov and stationarity properties, and use these, by way of an example, to derive a system of non-linear equations for lower and upper expected transition and return times. Most importantly, we prove a game-theoretic version of the strong law of large numbers for submartingale differences in imprecise probability trees, and use this to derive point-wise ergodic theorems for imprecise Markov chains.
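A minimal way to see lower expectations for imprecise Markov chains at work: if the transition model is the convex hull of finitely many candidate matrices, the lower expectation of f(X_n) is obtained by n applications of the lower transition operator, a row-wise minimum over the candidates. This is a toy sketch in our own notation (not the paper's game-theoretic framework), assuming the credal set is generated by the listed vertex matrices:

```python
import numpy as np

def lower_expectation(f, vertices, n):
    """Lower expectation of f(X_n), state by state, via n applications of the
    lower transition operator (Lv)_i = min over candidate rows of row_i . v.
    The elementwise min over the stacked products picks the minimizing row
    independently for each state, as imprecise-probability semantics allow."""
    v = np.asarray(f, dtype=float)
    for _ in range(n):
        v = np.min([P @ v for P in vertices], axis=0)
    return v

# Two vertex matrices on a 2-state space; f is the indicator of state 1.
P1 = np.array([[0.9, 0.1], [0.1, 0.9]])
P2 = np.array([[0.8, 0.2], [0.2, 0.8]])
print(lower_expectation([0.0, 1.0], [P1, P2], n=2))  # lower bound per start state
```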

9.

This article is concerned with the following problem: consider a multivariate stochastic process whose law is characterized in terms of some infinitesimal characteristics, such as the infinitesimal generator in the case of finite Markov chains. Under what conditions on these infinitesimal characteristics do the univariate components of the process agree in law with given univariate stochastic processes? Thus, in a sense, we study a stochastic-process counterpart of the stochastic dependence problem, which in the case of real-valued random variables is solved in terms of Sklar's theorem.

10.
We study the problem of stationarity and ergodicity for autoregressive multinomial logistic time series models which possibly include a latent process and are defined by a GARCH-type recursive equation. We improve considerably upon the existing conditions about stationarity and ergodicity of those models. Proofs are based on theory developed for chains with complete connections. A useful coupling technique is employed for studying ergodicity of infinite order finite-state stochastic processes which generalize finite-state Markov chains. Furthermore, for the case of finite order Markov chains, we discuss ergodicity properties of a model which includes strongly exogenous but not necessarily bounded covariates.

11.
Decision-making in an environment of uncertainty and imprecision for real-world problems is a complex task. In this paper we introduce general finite-state fuzzy Markov chains that have a finite convergence to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined; we then show that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrency does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.
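Fuzzy Markov chains compose their transition matrices with the max-min product rather than the usual sum-product, and the finite convergence claimed above can be observed by iterating that composition until the powers stabilize. The sketch below uses an illustrative matrix and function names of our own choosing:

```python
import numpy as np

def maxmin(A, B):
    """Max-min composition: result[i,k] = max_j min(A[i,j], B[j,k])."""
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def iterate_to_convergence(P, max_iter=100):
    """Iterate powers of a fuzzy transition matrix under max-min composition;
    for finite-state fuzzy Markov chains the powers reach a fixed (possibly
    periodic) pattern after finitely many steps."""
    Q = P
    for i in range(max_iter):
        Q_next = maxmin(Q, P)
        if np.array_equal(Q_next, Q):
            return Q, i + 1
        Q = Q_next
    return Q, max_iter

P = np.array([[0.9, 0.5, 0.2],
              [0.4, 0.8, 0.6],
              [0.1, 0.3, 0.7]])
Q, steps = iterate_to_convergence(P)
print(steps, Q)  # stabilizes after finitely many compositions
```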

12.
13.
This paper exposes the stochastic structure of traffic processes in a class of finite-state queueing systems which are modeled in continuous time as Markov processes. The theory is presented for the M/E_k/φ/L class under a wide range of queue disciplines. Particular traffic processes of interest include the arrival, input, output, departure and overflow processes. Several examples are given which demonstrate that the theory unifies many earlier works, as well as providing some new results. Several extensions to the model are discussed.

14.
S. Boyarchenko, S. Levendorskiĭ, PAMM 7 (1) (2007) 1081303–1081304
In this paper, we solve the pricing problem for American put-like options in Markov-modulated Lévy models. The early exercise boundaries and prices are calculated using a generalization of Carr's randomization for regime-switching models. An efficient iterative pricing procedure is developed. The computational time is of order m², where m is the number of states, and of order m if parallel computation is allowed. The payoffs, riskless rates and class of Lévy processes may depend on the state. Special cases are stochastic volatility models and models with stochastic interest rate; both must be modelled as finite-state Markov chains. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
Stochastic calculus and stochastic differential equations for Brownian motion were introduced by K. Itô in order to give a pathwise construction of diffusion processes. This calculus has deep connections with objects such as the Fock space and the Heisenberg canonical commutation relations, which have a central role in quantum physics. We review these connections, and give a brief introduction to the noncommutative extension of Itô's stochastic integration due to Hudson and Parthasarathy. Then we apply this scheme to show how finite Markov chains can be constructed by solving stochastic differential equations, similar to diffusion equations, on the Fock space.

16.
Equivalent definitions of tree-indexed Markov chains
Research on tree-indexed stochastic processes has already produced a number of results. Benjamini and Peres first gave the definition of tree-indexed Markov chains. Berger and Ye Zhongxing studied the existence of the entropy rate for stationary random fields on homogeneous trees. Yang Weiguo and Liu Wen studied strong laws of large numbers and the asymptotic equipartition property for Markov fields on trees. Yang Weiguo further studied strong laws of large numbers for Markov chains indexed by general trees. In order to study related problems for tree-indexed stochastic processes more effectively, this paper, building on previous work, gives an equivalent definition of tree-indexed Markov chains and proves the equivalence by mathematical induction.

17.
Motivated by queueing systems playing a key role in the performance evaluation of telecommunication networks, we analyze in this paper the stationary behavior of a fluid queue, when the instantaneous input rate is driven by a continuous-time Markov chain with finite or infinite state space. In the case of an infinite state space and for particular classes of Markov chains with a countable state space, such as quasi birth and death processes or Markov chains of the G/M/1 type, we develop an algorithm to compute the stationary probability distribution function of the buffer level in the fluid queue. This algorithm relies on simple recurrence relations satisfied by key characteristics of an auxiliary queueing system with normalized input rates.

18.
We study the question of geometric ergodicity in a class of Markov chains on the state space of non-negative integers for which, apart from a finite number of boundary rows and columns, the elements p_{jk} of the one-step transition matrix are of the form c_{k−j}, where {c_k} is a probability distribution on the set of integers. Such a process may be described as a general random walk on the non-negative integers with boundary conditions affecting transition probabilities into and out of a finite set of boundary states. The imbedded Markov chains of several non-Markovian queueing processes are special cases of this form. It is shown that there is an intimate connection between geometric ergodicity and geometric bounds on one of the tails of the distribution {c_k}. This research was supported by the U.S. Office of Naval Research Contract No. Nonr-855(09), and carried out while the author was a visitor in the Statistics Department, University of North Carolina, Chapel Hill.

19.
We show how to construct a canonical choice of stochastic area for paths of reversible Markov processes satisfying a weak Hölder condition, and hence demonstrate that the sample paths of such processes are rough paths in the sense of Lyons. We further prove that certain polygonal approximations to these paths and their areas converge in p-variation norm. As a corollary of this result and standard properties of rough paths, we are able to provide a significant generalization of the classical result of Wong–Zakai on the approximation of solutions to stochastic differential equations. Our results allow us to construct solutions to differential equations driven by reversible Markov processes of finite p-variation with p < 4. Received May 18, 2001 / final version received April 3, 2001. Published online April 8, 2002.

20.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the 'duration specific' chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing, in the latter it is regular.
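The finite chain described here can be written down directly: a state is a (location, duration) pair with duration capped at D, staying increments the (capped) duration, and leaving resets it to 1 in the new location. The sketch below builds the non-absorbing (regular) variant and finds its equilibrium by power iteration rather than by the paper's analytic representations; the inertia schedule and the two-location example are invented for illustration:

```python
import numpy as np

def duration_chain(move, stay, D):
    """Transition matrix on (location, duration) states, duration capped at D.
    stay[loc][d-1]: probability of remaining in loc after d periods there
    (non-decreasing in d, modelling cumulative inertia).
    move[loc][loc2]: where a leaver goes (off-diagonal row distribution)."""
    n = len(move)
    states = [(loc, d) for loc in range(n) for d in range(1, D + 1)]
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (loc, d), i in idx.items():
        p_stay = stay[loc][d - 1]
        P[i, idx[(loc, min(d + 1, D))]] += p_stay       # inertia: duration grows
        for loc2 in range(n):
            if loc2 != loc:                             # leave and reset duration
                P[i, idx[(loc2, 1)]] += (1 - p_stay) * move[loc][loc2]
    return P, states

def equilibrium(P, iters=5000):
    pi = np.ones(len(P)) / len(P)
    for _ in range(iters):
        pi = pi @ P
    return pi

# Two symmetric locations; inertia rises from 0.5 to 0.8 at the cap D = 2.
P, states = duration_chain([[0.0, 1.0], [1.0, 0.0]],
                           [[0.5, 0.8], [0.5, 0.8]], D=2)
pi = equilibrium(P)
print(dict(zip(states, pi.round(4))))
```

By symmetry the two locations share the same equilibrium mass, and the detailed balance of flows gives mass 1/7 to each (loc, 1) state and 5/14 to each capped (loc, 2) state.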


Copyright©北京勤云科技发展有限公司  京ICP备09084417号