20 similar documents found (search time: 29 ms)
1.
The aim of this paper is to examine multiple Markov dependence in both the discrete- and the continuous-parameter case. In both cases the Markov property with arbitrary parameter values is investigated, and it is shown that it forces the multiple Markov dependence to degenerate to simple Markov dependence.
2.
This paper develops bounds on the rate of decay of powers of Markov kernels on finite state spaces. These are combined with eigenvalue estimates to give good bounds on the rate of convergence to stationarity for finite Markov chains whose underlying graph has moderate volume growth. Roughly, for such chains, order (diameter)^2 steps are necessary and sufficient to reach stationarity. We consider local Poincaré inequalities and use them to prove Nash inequalities. These are bounds on l^2-norms in terms of Dirichlet forms and l^1-norms which yield decay rates for iterates of the kernel. This method is adapted from arguments developed by a number of authors in the context of partial differential equations and, later, in the study of random walks on infinite graphs. The main results do not require reversibility.
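As a toy illustration of the kind of convergence to stationarity such bounds control, the following sketch (not from the paper; the lazy random walk on a 6-cycle, its size, and the start state are arbitrary choices) iterates a kernel and watches the total variation distance to the uniform stationary distribution decay:

```python
# Illustrative sketch: total variation distance ||K^t(x, .) - pi|| for a
# lazy simple random walk on the n-cycle. The chain and n are made up
# for the example; the monotone decay is the general phenomenon.

def step(row, P):
    """One step of the chain: row vector times transition matrix P."""
    n = len(P)
    return [sum(row[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_distance(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

n = 6
# Lazy walk: hold with prob. 1/2, move to each neighbour with prob. 1/4.
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = 0.5
    P[i][(i - 1) % n] = 0.25
    P[i][(i + 1) % n] = 0.25

pi = [1.0 / n] * n          # stationary distribution (uniform by symmetry)
dist = [0.0] * n
dist[0] = 1.0               # start from a point mass at state 0

tv = []
for t in range(40):
    tv.append(tv_distance(dist, pi))
    dist = step(dist, P)

# TV distance to stationarity is non-increasing for any Markov chain.
assert all(tv[k + 1] <= tv[k] + 1e-12 for k in range(len(tv) - 1))
```

On a cycle the diameter is n/2, and the number of steps needed to make `tv` small indeed grows like the square of the diameter, in line with the abstract's moderate-growth estimate.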
3.
We suggest an approach to obtaining general estimates of stability in terms of special "weighted" norms related to total variation. Two important classes of continuous-time Markov chains are considered for which it is possible to obtain exact convergence rate estimates (and hence guarantee exact stability estimates): birth–death–catastrophe processes, and queueing models with batch arrivals and group services.
4.
Thomas Kaijser 《Acta Mathematica Sinica (English Series)》2011,27(3):441-476
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P.
5.
Jan Maas 《Journal of Functional Analysis》2011,261(8):2250-2292
Let K be an irreducible and reversible Markov kernel on a finite set X. We construct a metric W on the set of probability measures on X and show that with respect to this metric, the law of the continuous time Markov chain evolves as the gradient flow of the entropy. This result is a discrete counterpart of the Wasserstein gradient flow interpretation of the heat flow in R^n by Jordan, Kinderlehrer and Otto (1998). The metric W is similar to, but different from, the L^2-Wasserstein metric, and is defined via a discrete variant of the Benamou-Brenier formula.
7.
We establish the strong convergence of weighted sums of functionals of Markov chains in a single-infinite Markov environment with finite state space, and obtain a series of sufficient conditions under which this strong convergence holds.
8.
《Stochastic Processes and their Applications》2019,129(9):3319-3359
For Markov processes evolving on multiple time-scales, a combination of large component scalings and averaging of rapid fluctuations can lead to useful limits for model approximation. A general approach to proving a law of large numbers with a deterministic limit, and a central limit theorem around it, has already been developed in Kang and Kurtz (2013) and Kang et al. (2014). We present here a general approach to proving a large deviation principle in path space for such multi-scale Markov processes. Motivated by models arising in systems biology, we apply these large deviation results to general chemical reaction systems which exhibit multiple time-scales, and provide explicit calculations for several relevant examples.
9.
Apostolos N. Burnetas Michael N. Katehakis 《Mathematical Methods of Operations Research》1997,46(2):241-250
Consider a finite state irreducible Markov reward chain. It is shown that there exist simulation estimates and confidence intervals for the expected first passage times and rewards as well as the expected average reward, with 100% coverage probability. The length of the confidence intervals converges to zero with probability one as the sample size increases; it also satisfies a large deviations property.
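To make the object being estimated concrete, here is a minimal Monte Carlo sketch (a hypothetical example, not the paper's estimator, which additionally delivers 100%-coverage intervals): it estimates the expected first passage time from state 0 to state 2 in a small made-up 3-state chain and compares against the exact value from the first-passage linear equations:

```python
# Hypothetical 3-state example; the matrix entries are made up.
import random

P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

def sample_passage_time(rng, start=0, target=2):
    """Simulate the chain until it first hits `target`; return the step count."""
    state, steps = start, 0
    while state != target:
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            cum += p
            if u < cum:
                state = j
                break
        steps += 1
    return steps

# Exact expected hitting times h_i of state 2 satisfy:
#   h0 = 1 + 0.5*h0 + 0.3*h1,   h1 = 1 + 0.2*h0 + 0.5*h1.
# Solving: h1 = 2 + 0.4*h0, so 0.38*h0 = 1.6 and h0 = 160/38.
h0_exact = 160 / 38

rng = random.Random(1)
n = 20000
estimate = sum(sample_passage_time(rng) for _ in range(n)) / n
assert abs(estimate - h0_exact) < 0.2   # Monte Carlo mean is close to exact
```

The plain sample mean only has asymptotic (CLT-style) confidence intervals; the paper's point is that for finite irreducible chains one can do better and get intervals with full coverage.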
10.
Discrete time Markov chains with interval probabilities
Damjan Škulj 《International Journal of Approximate Reasoning》2009,50(8):1314-1329
The parameters of Markov chain models are often not known precisely. Instead of ignoring this problem, a better way to cope with it is to incorporate the imprecision into the models. This has become possible with the development of models of imprecise probabilities, such as the interval probability model. In this paper we discuss some modelling approaches which range from simple probability intervals to the general interval probability models and further to the models allowing completely general convex sets of probabilities. The basic idea is that precisely known initial distributions and transition matrices are replaced by imprecise ones, which effectively means that sets of possible candidates are considered. Consequently, sets of possible results are obtained and represented using similar imprecise probability models. We first set up the model and then show how to perform calculations of the distributions corresponding to the consecutive steps of a Markov chain. We present several approaches to such calculations and compare them with respect to the accuracy of the results. Next we consider a generalisation of the concept of regularity and study the convergence of regular imprecise Markov chains. We also give some numerical examples to compare different approaches to calculations of the sets of probabilities.
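A hedged sketch of the simplest such calculation, one step of an imprecise chain under plain probability intervals (an illustration of the interval model in general, not the paper's specific algorithms; all interval bounds and the initial distribution below are made up): each row of the transition matrix is only known to lie between L[i][j] and U[i][j] while still summing to 1, and rows vary independently, so bounds on the next-step distribution follow from each row's extreme feasible value in each column.

```python
# Made-up interval transition matrix: L[i][j] <= P[i][j] <= U[i][j],
# each row of P sums to 1, rows chosen independently.
L = [[0.3, 0.1, 0.2],
     [0.0, 0.5, 0.2],
     [0.2, 0.2, 0.3]]
U = [[0.6, 0.4, 0.4],
     [0.3, 0.8, 0.4],
     [0.5, 0.4, 0.6]]

def row_bounds(i, j):
    """Tightest reachable value of P[i][j] given the interval and row-sum-1 constraints."""
    lo = max(L[i][j], 1.0 - sum(U[i][k] for k in range(len(U[i])) if k != j))
    hi = min(U[i][j], 1.0 - sum(L[i][k] for k in range(len(L[i])) if k != j))
    return lo, hi

pi0 = [0.5, 0.3, 0.2]   # precisely known initial distribution

# Bounds on P(X1 = j): since rows vary independently, the extremes are
# attained by taking each row's extreme feasible value in column j.
lower = [sum(pi0[i] * row_bounds(i, j)[0] for i in range(3)) for j in range(3)]
upper = [sum(pi0[i] * row_bounds(i, j)[1] for i in range(3)) for j in range(3)]
```

Iterating this map propagates a set of distributions forward step by step; the abstract's accuracy comparison concerns exactly how tightly such sets are tracked under different interval models.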
11.
Estimation of spectral gap for Markov chains
Chen Mufa 《Acta Mathematica Sinica (English Series)》1996,12(4):337-360
The study of the convergence rate (spectral gap) in the L^2-sense is motivated by several different fields: probability, statistics, mathematical physics, computer science and so on, and it is now an active research topic. Based on a new approach (the coupling technique) introduced in [7] for the estimate of the convergence rate, and as a continuation of [4], [5], [7–9], [23] and [24], this paper studies the estimate of the rate for time-continuous Markov chains. Two variational formulas for the rate are presented here for the first time for birth–death processes. For diffusions, similar results are presented in an accompanying paper [10]. The new formulas enable us to recover or improve the main known results. The connection between the sharp estimate and the corresponding eigenfunction is explored and illustrated by various examples. A previous result on optimal Markovian couplings [4] is also extended in the paper. Research supported in part by NSFC, Qin Shi Sci & Tech. Foundation and the State Education Commission of China.
12.
In this paper, subgeometric ergodicity is investigated for continuous-time Markov chains. Several equivalent conditions, based on the first hitting time or the drift function, are derived as the main theorem. In its corollaries, practical drift criteria are given for ?-ergodicity and computable bounds on subgeometric convergence rates are obtained for stochastically monotone Markov chains. These results are illustrated by examples.
13.
V. S. Borkar 《Journal of Optimization Theory and Applications》1993,77(2):387-397
Milito and Cruz have introduced a novel adaptive control scheme for finite Markov chains when a finite parametrized family of possible transition matrices is available. The scheme involves the minimization of a composite functional of the observed history of the process incorporating both control and estimation aspects. We prove the a.s. optimality of a similar scheme when the state space is countable and the parameter space is a compact subset of R^d.
14.
Sandra Fortini Lucia Ladelli Giovanni Petris Eugenio Regazzini 《Stochastic Processes and their Applications》2002,100(1-2):147-165
Let X be a chain with discrete state space I, and V be the matrix of entries V_{i,n}, where V_{i,n} denotes the position of the process immediately after the nth visit to i. We prove that the law of X is a mixture of laws of Markov chains if and only if the distribution of V is invariant under finite permutations within rows (i.e., the V_{i,n}'s are partially exchangeable in the sense of de Finetti). We also prove that an analogous statement holds true for mixtures of laws of Markov chains with a general state space and atomic kernels. Going back to the discrete case, we analyze the relationships between partial exchangeability of V and Markov exchangeability in the sense of Diaconis and Freedman. The main statement is that the former is stronger than the latter, but the two are equivalent under the assumption of recurrence. Combination of this equivalence with the aforesaid representation theorem gives the Diaconis and Freedman basic result for mixtures of Markov chains.
15.
John Conlisk 《The Journal of mathematical sociology》2013,37(2-3):127-143
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
16.
The isomorphism theorem of Dynkin is an important tool for investigating problems posed in terms of local times of Markov processes. This theorem concerns continuous-time Markov processes. We give here an equivalent version for Markov chains.
17.
Rajeeva L. Karandikar Vidyadhar G. Kulkarni 《Stochastic Processes and their Applications》1985,19(2):225-235
Let {X_n, n ≥ 0} and {Y_n, n ≥ 0} be two stochastic processes such that Y_n depends on X_n in a stationary manner, i.e. P(Y_n ∈ A | X_n) does not depend on n. Sufficient conditions are derived for Y_n to have a limiting distribution. If X_n is a Markov chain with stationary transition probabilities and Y_n = f(X_n, ..., X_{n+k}), then Y_n depends on X_n in a stationary way. Two situations are considered: (i) {X_n, n ≥ 0} has a limiting distribution, (ii) {X_n, n ≥ 0} does not have a limiting distribution and exits every finite set with probability 1. Several examples are considered, including that of a non-homogeneous Poisson process with periodic rate function, where we obtain the limiting distribution of the interevent times.
18.
J. B. Lasserre 《Journal of Optimization Theory and Applications》1991,71(2):407-413
Given a family of Markov chains whose transition matrices depend on a parameter vector, we give an exact formula for the gradient of the equilibrium distribution with respect to that parameter, even in the case of multiple ergodic classes and transient states. This formula generalizes previous results in the ergodic case.
19.
Karl Sigman 《Queueing Systems》1988,3(2):179-198
We present a framework for representing a queue at arrival epochs as a Harris recurrent Markov chain (HRMC). The input to the queue is a marked point process governed by a HRMC, and the queue dynamics are formulated by a general recursion. Such inputs include the cases of i.i.d., regenerative, Markov modulated and Markov renewal input, as well as the output from some queues. Since a HRMC is regenerative, the queue inherits the regenerative structure. As examples, we consider split & match, tandem, G/G/c and more general skip-forward networks. In the case of i.i.d. input, we show the existence of regeneration points for a Jackson-type open network having general service and interarrival time distributions. A revised version of the author's winning paper of the 1986 George E. Nicholson Prize (awarded by the Operations Research Society of America).
20.
James Allen Fill 《Journal of Theoretical Probability》1992,5(1):45-70
Let X(t), 0 ≤ t < ∞, be an ergodic continuous-time Markov chain with finite or countably infinite state space. We construct a strong stationary dual chain X* whose first hitting times yield bounds on the convergence to stationarity for X. The development follows closely the discrete-time theory of Diaconis and Fill [2, 3]. However, for applicability it is important that we formulate our results in terms of infinitesimal rates, and this raises new issues.