Similar Documents
20 similar documents found.
1.
In this article, we discuss subgeometric ergodicity of a class of regime-switching diffusion processes. We derive conditions on the drift and diffusion coefficients and on the switching mechanism that imply subgeometric ergodicity of the corresponding semigroup with respect to the total variation distance, as well as with respect to a class of Wasserstein distances. Finally, we also discuss subgeometric ergodicity of certain classes of regime-switching Markov processes with jumps.
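For orientation, a regime-switching diffusion is usually written as a pair (X_t, Λ_t), where Λ_t is a jump process on a finite or countable index set that modulates the coefficients; a minimal sketch of the standard form (our notation, not taken from the article) is

$$dX_t = b(X_t, \Lambda_t)\,dt + \sigma(X_t, \Lambda_t)\,dB_t,$$

with Λ_t switching between regimes at rates that may depend on the current state X_t.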

2.
We provide a condition in terms of a supermartingale property for a functional of the Markov process, which implies (a) f-ergodicity of strong Markov processes at a subgeometric rate, and (b) a moderate deviation principle for an integral (bounded) functional. An equivalent condition in terms of a drift inequality on the extended generator is also given. Results related to (f, r)-regularity of the process, of some skeleton chains and of the resolvent chain are also derived. Applications to specific processes are considered, including elliptic stochastic differential equations, Langevin diffusions, hypoelliptic stochastic damping Hamiltonian systems and storage models.
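The drift inequality on the extended generator referred to here is usually stated as follows: there exist a function V ≥ 1, a concave increasing rate function φ, a constant b < ∞ and a closed petite set C such that

$$\mathcal{A}V \le -\varphi \circ V + b\,\mathbf{1}_C,$$

where 𝒜 denotes the extended generator; this is the standard subgeometric form, given here for orientation rather than quoted from the paper.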

3.
We consider a form of state-dependent drift condition for a general Markov chain, whereby the chain subsampled at some deterministic time satisfies a geometric Foster–Lyapunov condition. We present sufficient criteria for such a drift condition to exist, and use these to partially answer a question posed in Connor and Kendall (2007) [2] concerning the existence of so-called ‘tame’ Markov chains. Furthermore, we show that our ‘subsampled drift condition’ implies the existence of finite moments for the return time to a small set.
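Concretely, a geometric Foster–Lyapunov condition for the chain subsampled at a deterministic time m asks for V ≥ 1, λ < 1, b < ∞ and a small set C with

$$P^{m}V(x) \le \lambda V(x) + b\,\mathbf{1}_C(x),$$

which is the textbook geometric drift condition applied to the m-step kernel.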

4.
Results of Samuels and Wendel for the simple random walk with drift on the integers, which assert independence of the interarrival times at the sets {a−r, a+r}, r = 1, 2, …, k, and the arrival position in the set {a−k, a+k}, where a is the starting point, are reobtained by treating the walk as a Markov chain and considering related chains conditional on absorption at a specified barrier.

5.
We consider a class of continuous-time Markov chains on ℤ^d. These chains are the discrete-space analogue of Markov processes with jumps. Under some conditions, as we show, harmonic functions associated with these Markov chains are Hölder continuous.
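Recall that, for a discrete chain with transition kernel p, a function h is harmonic on a set D when one step of the chain leaves it invariant there:

$$h(x) = \sum_{y} p(x, y)\,h(y), \qquad x \in D;$$

this standard definition is recalled only for context.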

6.
In this paper, we prove that the Foster–Lyapunov drift condition is necessary and sufficient for recurrence of a Markov chain on a general state space. This yields an affirmative answer to a question posed in the monograph of Meyn and Tweedie (Markov Chains and Stochastic Stability, Springer, Berlin, 1993, p. 175).
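The drift condition in question has the classical form: there exist a petite set C and a function V, unbounded off petite sets, with

$$PV(x) \le V(x) \qquad \text{for all } x \notin C;$$

this is the standard statement from the Meyn–Tweedie theory, recalled here for orientation.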

7.
The problem of multivariate information analysis is considered. First, the interaction information in each dimension is defined analogously to McGill [4] and then applied to Markov chains. The property of zero interaction information is deeply related to a certain class of weakly dependent random variables. For homogeneous, recurrent Markov chains with m states, m, n ≥ 3, the zero criterion for n-dimensional interaction information is achieved only by (n − 2)-dependent Markov chains, which are generated by some nilpotent matrices. Further, for Gaussian Markov chains, it gives the decomposition rule of the variables into mutually correlated subchains.
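McGill's three-variable interaction information, the template for the higher-dimensional definitions used here, can be written as

$$I(X; Y; Z) = I(X; Y) - I(X; Y \mid Z),$$

the change in mutual information between X and Y caused by conditioning on Z; note that the opposite sign convention also appears in the literature.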

8.
General characterizations of ergodic Markov chains have been developed in considerable detail. In this paper, we study transience for discrete-time Markov chains on general state spaces, including geometric transience and algebraic transience. Criteria are presented by bounding a modified moment of the first return time and by establishing an appropriate drift condition. Moreover, we apply the criteria to the random walk on the half line and to the skip-free chain on the nonnegative integers.

9.
This paper develops exponential-type upper bounds for scaled occupation measures of singularly perturbed Markov chains in discrete time. By exploiting the two time scales in the Markov chains, an asymptotic analysis is carried out. The cases in which the fast-changing transition probability matrix is irreducible, and in which it is divisible into l ergodic classes, are examined first; upper bounds for a sequence of scaled occupation measures are derived. Extensions to Markov chains involving transient states and/or nonhomogeneous transition probabilities are then treated. The results further our understanding of the underlying Markov chains and related dynamic systems, which is essential for solving many control and optimization problems.

10.
An eigentime identity is proved for transient symmetrizable Markov chains. For general Markov chains, if the trace of the Green matrix is finite, then the expectation of the first leap time is uniformly bounded; for single birth processes these two properties are proved to be equivalent. For birth–death processes, explicit formulas are presented. As an application, we give bounds on the exponential convergence rates of the (sub-)Markov semigroup P_t from l^∞ to l^∞.
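For context, the Green matrix (or potential matrix) of a transient chain with transition function p_t is

$$G(i, j) = \int_0^\infty p_t(i, j)\,dt,$$

the expected total time spent in state j when starting from i; the finiteness of its trace is the summability condition appearing above.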

11.
A careful look at rough-path topology applied to Brownian motion reveals new possible properties of the well-known Lévy area, in particular the presence of an intrinsic drift of this area. Using a renormalization limit of Markov chains on periodic graphs, we present a construction of such a non-trivial drift and give an explicit formula for it. Several examples with explicit computations are included.
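The Lévy area in question is the antisymmetric part of the second iterated integral of a two-dimensional Brownian motion B = (B^1, B^2),

$$A_{s,t} = \frac{1}{2}\int_s^t (B^1_u - B^1_s)\,dB^2_u - (B^2_u - B^2_s)\,dB^1_u,$$

recalled here in its standard form.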

12.
Stochastic networks with time-varying arrival and service rates and routing structure are studied. Time variations are governed, in addition to the state of the system, by two independent finite-state Markov processes X and Y. The transition times of X are significantly smaller than typical inter-arrival and processing times, whereas the reverse is true for the Markov process Y. By introducing a suitable scaling parameter, one can model such a system using a hierarchy of time scales. Diffusion approximations for such multiscale systems are established under a suitable heavy traffic condition. In particular, it is shown that, under certain conditions, properly normalized buffer content processes converge weakly to a reflected diffusion. The drift and diffusion coefficients of this limit model are functions of the state process, the invariant distribution of X, and a finite-state Markov process which is independent of the driving Brownian motion.

13.
For continuous-time Markov chains, we provide criteria for non-ergodicity, non-algebraic ergodicity, non-exponential ergodicity, and non-strong ergodicity. For discrete-time Markov chains, criteria for non-ergodicity, non-algebraic ergodicity, and non-strong ergodicity are given. Our criteria are in terms of the existence of solutions to inequalities involving the Q-matrix of the chain (or the transition matrix P in the discrete-time case). These practical criteria are then applied to some examples, including a special class of single birth processes and several multi-dimensional models.

14.
This paper considers two-person zero-sum Markov games with finitely many states and actions, under the criterion of average reward per unit time. Two special situations are treated, and it is shown that in both cases the method of successive approximations yields an ε-band for the value of the game as well as stationary ε-optimal strategies. In the first case, all underlying Markov chains of pure stationary optimal strategies are assumed to be unichained. In the second case, it is assumed that the functional equation Uv = v + ge has a solution.
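The ε-band produced by successive approximation is, in the usual average-reward formulation, the pair of bounds obtained from consecutive iterates v_{n+1} = Uv_n:

$$\min_s \bigl[v_{n+1}(s) - v_n(s)\bigr] \;\le\; g \;\le\; \max_s \bigl[v_{n+1}(s) - v_n(s)\bigr],$$

with the iteration stopped once the spread falls below ε; this is standard dynamic-programming background, not a quotation from the paper.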

15.
For a Markov transition kernel P and a probability distribution μ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_{\mu} = \sum_k \mu(k) P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results, we compare the efficiency of Barker's and Metropolis algorithms in terms of asymptotic variance.
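As a quick numerical illustration (toy kernel and sampling distribution of our own choosing, not from the note), the time-sampled kernel can be assembled directly:

    import numpy as np

    # Toy 3-state transition kernel; rows sum to 1.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])

    # mu(k): probability of taking k steps of P per step of the
    # time-sampled chain (supported on {0, 1, 2, 3} here).
    mu = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}

    # P_mu = sum_k mu(k) P^k, with P^0 the identity matrix.
    P_mu = sum(w * np.linalg.matrix_power(P, k) for k, w in mu.items())

    print(P_mu)              # a valid transition kernel ...
    print(P_mu.sum(axis=1))  # ... each row still sums to 1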

16.
We show that the original classic randomized algorithms for approximate counting in NP-hard problems, such as counting the number of satisfying assignments of a SAT instance, counting the number of feasible colorings of a graph, and computing the permanent, typically fail: they either do not converge at all or are heavily biased (they converge to a local extremum). Exceptions are convex counting problems, such as estimating the volume of a convex polytope. We also show how their performance can be dramatically improved by combining them with the classic splitting method, which is based on simulating multiple Markov chains simultaneously. We present several algorithms of the combined version, which we simply call splitting algorithms, and show that the most advanced splitting version coincides with the cloning algorithm suggested earlier by the author. Compared to the randomized algorithms, the proposed splitting algorithms require very little warm-up time when running the MCMC from iteration to iteration, since the underlying Markov chains are already in steady state from the beginning; all that is required is fine-tuning, i.e., keeping the Markov chains in steady state while moving from iteration to iteration. We present extensive simulation studies with both the splitting and the randomized algorithms for different NP-hard counting problems.
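To make the splitting idea concrete, here is a self-contained toy sketch (a toy problem of our own, not one of the paper's benchmarks): counting binary strings of length 12 with at least 9 ones by pushing a population of chains through nested levels, cloning the survivors, and "shaking" them with a few Metropolis moves at each level.

    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)
    n, m, N = 12, 9, 2000          # count x in {0,1}^12 with >= 9 ones

    def mcmc_step(X, gamma):
        """One Metropolis pass targeting the uniform law on {S(x) >= gamma}:
        propose flipping one random bit per sample, accept iff feasible."""
        idx = rng.integers(0, n, size=len(X))
        for r, j in enumerate(idx):
            X[r, j] ^= 1
            if X[r].sum() < gamma:   # constraint violated: undo the flip
                X[r, j] ^= 1
        return X

    X = rng.integers(0, 2, size=(N, n))   # uniform on {0,1}^n (level 0)
    est = float(2 ** n)
    for gamma in range(1, m + 1):
        alive = X[X.sum(axis=1) >= gamma]          # survivors of this level
        est *= len(alive) / N                      # conditional probability
        # Clone survivors back up to N (assumed non-empty at this toy size).
        X = alive[rng.integers(0, len(alive), N)]
        for _ in range(5):                         # shake with MCMC
            X = mcmc_step(X, gamma)

    exact = sum(comb(n, k) for k in range(m, n + 1))
    print(f"splitting estimate ~ {est:.1f}, exact = {exact}")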

17.
In this paper, we develop an algorithmic method for evaluating the steady-state probability vector of a special class of finite-state Markov chains. For the class of Markov chains considered here, it is assumed that the matrix associated with the set of linear equations for the steady-state probabilities possesses a special structure, such that it can be rearranged and decomposed as a sum of two matrices: one lower triangular with nonzero diagonal elements, and the other upper triangular with only very few nonzero columns. Almost all Markov chain models of queueing systems with a finite source and/or finite capacity and a first-come-first-served or head-of-the-line nonpreemptive priority service discipline belong to this special class.
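For contrast, the generic computation that such structured methods specialize is the direct solution of πP = π with Σ_i π_i = 1; a minimal numpy sketch (toy matrix, and emphatically not the paper's triangular-decomposition algorithm):

    import numpy as np

    P = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.4, 0.6]])

    # Stack (P^T - I) pi = 0 with the normalization row sum(pi) = 1.
    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi, pi @ P)   # stationary vector; pi @ P equals pi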

18.
Consider a continuous-time Markov chain with stationary transition probabilities, of which a function of the state is observed. A regular conditional probability distribution for the trajectory of the chain, given observations up to time t, is obtained. This distribution also corresponds to a Markov chain, but the conditional chain has nonstationary transition probabilities. In particular, computation of the conditional distribution of the state at time s is discussed. For s > t, we have prediction (extrapolation), while s < t corresponds to smoothing (interpolation). Equations for the conditional state distribution are given in matrix form and as recursive differential equations with varying s or t. These differential equations are closely related to Kolmogorov's forward and backward equations. Markov chains with one observed and one unobserved component are treated as a special case. In an example, the conditional distribution of the change-point is derived for a Poisson process with changing intensity, given observations of the Poisson process.
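The discrete-time analogue of this prediction/smoothing machinery is the familiar forward-backward recursion for hidden Markov chains; a minimal sketch (toy two-state model of our own, not the paper's continuous-time equations):

    import numpy as np

    P = np.array([[0.9, 0.1],   # transition matrix of the hidden chain
                  [0.2, 0.8]])
    E = np.array([[0.8, 0.2],   # E[i, y]: probability of observing y in state i
                  [0.3, 0.7]])
    p0 = np.array([0.5, 0.5])   # initial state distribution
    obs = [0, 0, 1, 1, 0]       # observed sequence

    # Forward pass: filtered distributions P(state_t | obs_0..t).
    a = p0 * E[:, obs[0]]
    alphas = [a / a.sum()]
    for y in obs[1:]:
        a = (alphas[-1] @ P) * E[:, y]
        alphas.append(a / a.sum())

    # Backward pass: smoothed distributions P(state_t | all obs).
    beta = np.ones(2)
    smoothed = [None] * len(obs)
    smoothed[-1] = alphas[-1]
    for t in range(len(obs) - 2, -1, -1):
        beta = P @ (E[:, obs[t + 1]] * beta)
        s = alphas[t] * beta
        smoothed[t] = s / s.sum()

    print(np.round(np.array(smoothed), 3))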

19.
Performance evaluation of complex systems is a critical issue, and bounds computation provides confidence about the service quality, reliability, etc. of such systems. Stochastic ordering theory has generated a lot of work on bounds computation. Maximal lower and minimal upper bounds of a Markov chain by an st-monotone one exist and can be computed efficiently. In the present work, we extend this last result simultaneously in two directions. On the one hand, we handle the case of a maximal monotone lower bound for a family of Markov chains whose coefficients are given by numerical intervals. On the other hand, these chains are sub-chains associated with sub-stochastic matrices. We prove the existence of this maximal bound and provide polynomial-time algorithms to compute it for both discrete and continuous Markov chains. Moreover, it turns out that the bounding sub-chain of a family of strictly sub-stochastic chains is not necessarily strictly sub-stochastic. We establish a characterization of the families of sub-chains for which these bounds are strictly sub-stochastic. Finally, we show how to apply these results to a classical model of a repairable system. A forthcoming paper will present detailed numerical results and comparisons with other methods.
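For reference, the strong stochastic (st) order underlying these bounds compares upper tails: for probability vectors p, q on a totally ordered state space,

$$p \le_{st} q \iff \sum_{j \ge k} p_j \le \sum_{j \ge k} q_j \quad \text{for all } k,$$

and a kernel P is st-monotone when x ≤ y implies P(x, ·) ≤_st P(y, ·); these are the standard definitions, recalled for context.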

20.
We present a Markov chain Monte Carlo (MCMC) method for generating Markov chains using Markov bases for conditional independence models for a four-way contingency table. We then describe a Markov basis characterized by the Markov properties associated with a given conditional independence model and show how to use it to generate random tables of a Markov chain. Estimates of exact p-values can be obtained from the random tables generated by the MCMC method. Numerical experiments examine the performance of the proposed MCMC method in comparison with the χ² approximation using large sparse contingency tables.
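To show the basic mechanics on the simplest case (a two-way independence model, not the four-way conditional independence models treated in the paper), the classical basic moves on a table with fixed margins can be run as follows; the table, chain length, and seed are all illustrative:

    import numpy as np
    from math import lgamma

    rng = np.random.default_rng(1)
    x = np.array([[10, 2, 3],          # observed 3x3 table; row and column
                  [4, 8, 2],           # margins are preserved by the moves
                  [1, 3, 9]])

    def log_h(t):
        """Log of the hypergeometric weight prod 1/t_ij! (up to a constant)."""
        return -sum(lgamma(v + 1) for v in t.ravel())

    def step(t):
        """One Metropolis step using a basic 2x2 Markov-basis move."""
        i, k = rng.choice(3, 2, replace=False)   # two distinct rows
        j, l = rng.choice(3, 2, replace=False)   # two distinct columns
        eps = rng.choice([-1, 1])
        t2 = t.copy()
        t2[i, j] += eps; t2[k, l] += eps
        t2[i, l] -= eps; t2[k, j] -= eps
        if t2.min() < 0:
            return t                              # infeasible: stay put
        if np.log(rng.random()) < log_h(t2) - log_h(t):
            return t2
        return t

    def chi2(t):
        e = np.outer(t.sum(1), t.sum(0)) / t.sum()
        return ((t - e) ** 2 / e).sum()

    # Start at the observed table, no burn-in (adequate for a sketch).
    obs, t, hits, n = chi2(x), x.copy(), 0, 20000
    for _ in range(n):
        t = step(t)
        hits += chi2(t) >= obs
    print("estimated exact p-value:", hits / n)   # cf. chi-square approximation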
