20 similar documents found (search time: 165 ms)
1.
2.
In this paper, we consider a family of Markov bridges with jumps constructed from truncated stable processes. These Markov bridges depend on a small parameter ε > 0 and have fixed initial and terminal positions. We propose a new method to prove a large deviation principle for this family of bridges, based on compact level sets, changes of measure, duality, and various global and local estimates of transition densities for truncated stable processes.
3.
Jinwen CHEN 《Frontiers of Mathematics in China》2014,9(4):753-759
We outline an approach to investigating the limiting law of an absorbing Markov chain conditional on not having been absorbed for a long time. The main idea is to employ the Donsker-Varadhan entropy functional, which is typically used as the large deviation rate function for Markov processes. This approach provides an interpretation for a certain quasi-ergodicity.
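The conditional limiting law discussed in this abstract can be illustrated numerically: for a finite absorbing chain, conditioning on long-time survival drives the law of the chain toward the normalized left Perron eigenvector of the substochastic transition matrix restricted to the surviving states. A minimal sketch (the 2×2 matrix `Q` below is an invented toy example, not taken from the paper):

```python
import numpy as np

# Substochastic transition matrix among the surviving states of a toy
# absorbing chain (rows sum to < 1; the missing mass is absorption).
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])

# Conditioning on non-absorption up to a large time drives the law of
# the chain to the normalized left Perron eigenvector of Q (the
# quasi-stationary distribution); approximate it by power iteration.
nu = np.ones(2) / 2
for _ in range(200):
    nu = nu @ Q
    nu /= nu.sum()            # renormalize: condition on survival

# Here nu converges to (0.4, 0.6), the left eigenvector of Q for its
# leading eigenvalue 0.8.
```

The renormalization at each step is exactly the "condition on having not been absorbed" operation; the Donsker-Varadhan variational machinery generalizes this eigenvector characterization beyond the finite-matrix case.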
4.
Elizabeth L. Wilmer 《Journal of Theoretical Probability》2003,16(3):751-770
By proving a local limit theorem for higher-order transitions, we determine the time required for necklace chains to be close to stationarity. Because necklace chains, built by arranging identical smaller Markov chains around a directed cycle, are not reversible, have little symmetry, do not have uniform stationary distributions, and can be nearly periodic, prior general bounds on rates of convergence of Markov chains either do not apply or give poor bounds. Necklace chains can serve as test cases for future techniques for bounding rates of convergence.
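As a rough illustration of the construction described in this abstract, a necklace-style chain can be assembled from a small transition matrix by placing identical copies around a directed cycle. The block layout and the hop mechanism below (leaving each copy only from its state 0, with probability `p_hop`) are assumptions made for the sketch, not the paper's exact definition:

```python
import numpy as np

def necklace(P, m, p_hop=0.5):
    """Hypothetical necklace construction: m identical copies of the
    k-state chain P placed around a directed cycle.  From state 0 of
    each copy, the walk hops to state 0 of the next copy with
    probability p_hop (assumes m >= 2).  The layout is an assumption
    for illustration, not the paper's exact definition."""
    k = P.shape[0]
    N = np.zeros((m * k, m * k))
    for i in range(m):
        blk = slice(i * k, (i + 1) * k)
        N[blk, blk] = P                        # copy-internal moves
        N[i * k, blk] *= 1.0 - p_hop           # thin moves at state 0 ...
        N[i * k, ((i + 1) % m) * k] += p_hop   # ... and hop to next copy
    return N

# A 2-state building block arranged around a 3-cycle: 6 states total.
P = np.array([[0.5, 0.5],
              [0.4, 0.6]])
N = necklace(P, 3)
```

Even this toy version exhibits the features the abstract highlights: the directed hop makes the chain non-reversible, and small `p_hop` makes it nearly periodic around the cycle.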
5.
《Stochastic Processes and their Applications》2020,130(5):2596-2638
In this paper we introduce a new generalisation of the relative Fisher Information for Markov jump processes on a finite or countable state space, and prove an inequality which connects this object with the relative entropy and a large deviation rate functional. In addition to possessing various favourable properties, we show that this generalised Fisher Information converges to the classical Fisher Information in an appropriate limit. We then use this generalised Fisher Information and the aforementioned inequality to qualitatively study coarse-graining problems for jump processes on discrete spaces.
6.
Arnab Ganguly 《Stochastic Processes and their Applications》2018,128(7):2179-2227
The paper establishes large deviation principles for a sequence of stochastic integrals and stochastic differential equations driven by general semimartingales in infinite-dimensional settings. The class of semimartingales considered is broad enough to cover Banach space-valued semimartingales and martingale random measures. Simple, usable expressions for the associated rate functions are given in this abstract setup. As illustrated through several concrete examples, the results presented here provide a new systematic approach to the study of large deviation principles for sequences of Markov processes.
7.
Motivated by problems arising in time-dependent queues and dynamic systems with random environment, this work develops moderate deviations principles for dynamic systems driven by a fast-varying non-homogeneous Markov chain in continuous time. A distinct feature is that the Markov chain is time-dependent (inhomogeneous), and so are the dynamic systems. Under irreducibility of the non-homogeneous Markov chain, moderate deviations of a non-homogeneous functional are established first. With the help of a martingale problem formulation and a functional central limit theorem for the two-timescale system, both upper and lower bounds of moderate deviations are obtained for the rapidly fluctuating Markovian systems. Applications to queueing systems and dynamic systems modulated by a fast-varying Markov chain are then examined.
8.
In this paper we establish spatial central limit theorems for a large class of supercritical branching Markov processes with general spatial-dependent branching mechanisms. These are generalizations of the spatial central limit theorems proved in [1] for branching OU processes with binary branching mechanisms. Compared with the results of [1], our central limit theorems are more satisfactory in the sense that the normal random variables in our theorems are non-degenerate.
9.
P. Dai Pra 《Communications on Pure and Applied Mathematics》1993,46(3):387-422
We prove a process-level large deviation principle for the space-time empirical averages of continuous-time systems on an infinite lattice. Our methods rely on the Donsker-Varadhan large deviation theory for Markov processes, and allow us to express the rate function rather explicitly in terms of the Markov generator of the infinite particle system. We can prove our principle for a large class of spin systems with no particle exchange, as well as for infinite diffusion processes whose drift is the gradient of a finite range Hamiltonian. © 1993 John Wiley & Sons, Inc.
10.
11.
We consider systems of spatially distributed branching particles in R^d. The particle lifelengths are of general form, hence the time propagation of the system is typically not Markov. A natural time-space-mass scaling is applied to a sequence of particle systems and we derive limit results for the corresponding sequence of measure-valued processes. The limit is identified as the projection on R^d of a superprocess in R_+ × R^d. The additive functional characterizing the superprocess is the scaling limit of certain point processes, which count generations along a line of descent for the branching particles.
12.
The asymptotic behavior of a queueing process in overloaded state-dependent queueing models (systems and networks) of a switching structure is investigated. A new approach to the study of fluid and diffusion approximation type theorems (without reflection) in transient and quasi-stationary regimes is suggested. The approach is based on functional limit theorems of averaging principle and diffusion approximation types for so-called switching processes. Some classes of state-dependent Markov and non-Markov overloaded queueing systems and networks with different types of calls, batch arrival and service, unreliable servers, networks (M_{SM,Q}/M_{SM,Q}/1/)^r switched by a semi-Markov environment, and state-dependent polling systems are considered.
13.
Large Deviations for Empirical Measures of Not Necessarily Irreducible Countable Markov Chains with Arbitrary Initial Measures
Yi Wen JIANG Li Ming WU 《Acta Mathematica Sinica (English Series)》2005,21(6):1377-1390
All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.
14.
Daehong Kim Masayoshi Takeda Jiangang Ying 《Proceedings of the American Mathematical Society》2002,130(7):2115-2123
For symmetric continuous time Markov chains, we obtain some formulas on total occupation times and limit theorems of additive functionals by using large deviation theory.
15.
Limit theorems for functionals of classical (homogeneous) Markov renewal and semi-Markov processes have been known for a long time, since the pioneering work of Pyke and Schaufele (Limit theorems for Markov renewal processes, Ann. Math. Statist., 35(4):1746–1764, 1964). Since then, these processes, as well as their time-inhomogeneous generalizations, have found many applications, for example in finance and insurance. To the best of the authors' knowledge, however, no limit theorems have been obtained for functionals of inhomogeneous Markov renewal and semi-Markov processes to date. In this article, we provide strong law of large numbers and central limit theorem results for such processes. In particular, we make an important connection between our results and the theory of ergodicity of inhomogeneous Markov chains. Finally, we provide an application to risk processes used in insurance by considering an inhomogeneous semi-Markov version of the well-known continuous-time Markov chain model widely used in the literature.
16.
Roy Cerqueti Paolo Falbo Gianfranco Guastaroba Cristian Pelizzari 《European Journal of Operational Research》2013
Markov chain theory is proving to be a powerful approach to bootstrapping finite-state processes, especially where time dependence is nonlinear. In this work we extend the approach to bootstrap discrete-time continuous-valued processes. To this purpose we solve a minimization problem that partitions the state space of a continuous-valued process into a finite number of intervals or unions of intervals (i.e., its states) and identifies the time lags which provide "memory" to the process. A distance is used as the objective function to encourage the clustering of states having similar transition probabilities. The problem of the exploding number of alternative partitions in the solution space (which grows with the number of states and the order of the Markov chain) is addressed through a Tabu Search algorithm. The method is applied to bootstrap the series of German and Spanish electricity prices. The analysis of the results confirms the good consistency properties of the proposed method.
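The core bootstrap loop described in this abstract, stripped of the paper's Tabu-Search partition optimisation, can be sketched as follows. The simplifications are assumptions for illustration: fixed quantile bins instead of an optimised partition, a first-order chain instead of selected lags, and a synthetic series in place of electricity prices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500).cumsum()        # toy "price" series

# 1. Discretize the continuous-valued series into k states.
#    (The paper optimises this partition with Tabu Search; here we
#    simply use quantile bins.)
k = 4
edges = np.quantile(x, np.linspace(0, 1, k + 1))
state = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, k - 1)

# 2. Estimate the transition matrix of the induced first-order chain.
P = np.zeros((k, k))
for a, b in zip(state[:-1], state[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# 3. Bootstrap: simulate a state path from P, then draw an observed
#    value uniformly from the historical values in each visited state.
s = state[0]
boot = []
for _ in range(500):
    s = rng.choice(k, p=P[s])
    boot.append(rng.choice(x[state == s]))
```

The resampled series `boot` preserves the one-step state-transition structure of the original data, which is what distinguishes this scheme from an i.i.d. bootstrap.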
17.
Ergodic control of singularly perturbed Markov chains with general state and compact action spaces is considered. A new method is given for characterizing the limit of invariant measures of perturbed chains as the perturbation parameter goes to zero. It is also demonstrated that the limit control principle is satisfied under natural ergodicity assumptions on the controlled Markov chains. These assumptions allow for the presence of transient states, a situation that had not previously been considered in the literature on control of singularly perturbed Markov processes with long-run-average cost functionals.
Accepted 3 December 1996
18.
Shulan Hu 《Stochastic Processes and their Applications》2011,121(1):61-90
In this paper, we prove the large deviation principle (LDP) for the occupation measures of not necessarily irreducible random dynamical systems driven by Markov processes. The LDP for not necessarily irreducible dynamical systems driven by an i.i.d. sequence is derived. As a further application we establish the LDP for extended hidden Markov models, filling a gap in the literature, and obtain large deviation estimates for the log-likelihood process and the maximum likelihood estimator of hidden Markov models.
19.
《Stochastic Processes and their Applications》2020,130(1):328-365
This paper provides a general and abstract approach to computing invariant distributions for Feller processes. More precisely, we show that the recursive algorithm presented in Lamberton and Pagès (2002), based on simulation of stochastic schemes with decreasing steps, can be used to build invariant measures for general Feller processes. We also propose various applications: approximation of the stationary regime of a Markov Brownian diffusion with a Milstein or an Euler scheme, and approximation of the stationary regime of a Markov-switching Brownian diffusion using an Euler scheme.
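The decreasing-step recursion of Lamberton and Pagès can be sketched on a toy one-dimensional case. The choices below are assumptions for illustration, not the paper's general Feller setting: the target is the Ornstein-Uhlenbeck diffusion dX = -X dt + dW (invariant law N(0, 1/2)), the steps are γ_n = n^{-1/2}, and the invariant measure is approximated by the γ-weighted empirical measure of the Euler iterates:

```python
import numpy as np

# Euler scheme with decreasing steps gamma_n and weighted empirical
# moments, in the spirit of Lamberton and Pages (2002).
# X_{n+1} = X_n + b(X_n) * g_n + sqrt(g_n) * xi_n,  g_n = n^{-1/2},
# with b(x) = -x (the OU drift) and unit diffusion coefficient.
rng = np.random.default_rng(1)
x = 0.0
wsum = 0.0          # running sum of weights gamma_n
m2 = 0.0            # gamma-weighted running second moment
for n in range(1, 200_001):
    g = n ** -0.5                        # decreasing step gamma_n
    x += -x * g + np.sqrt(g) * rng.normal()
    wsum += g
    m2 += g * x * x

var_est = m2 / wsum  # estimate of the invariant variance, close to 1/2
```

The steps satisfy the usual conditions (γ_n → 0 while Σ γ_n = ∞), so the weighted empirical measure converges to the invariant distribution; here `var_est` should settle near the true invariant variance 1/2.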
20.
Anthony G. Pakes 《Stochastic Processes and their Applications》1979,8(3):277-303
The concept of a limiting conditional age distribution of a continuous-time Markov process whose state space is the set of non-negative integers and for which {0} is absorbing is defined as the weak limit, as t→∞, of the last time before t that an associated "return" Markov process exited from {0}, conditional on the state, j, of this process at t. It is shown that this limit exists and is non-defective if the return process is ρ-recurrent and satisfies the strong ratio limit property. As a preliminary to the proof of the main results, some general results are established on the representation of the ρ-invariant measure and function of a Markov process. The conditions of the main results are shown to be satisfied by the return process constructed from a Markov branching process and by birth and death processes. Finally, a number of limit theorems for the limiting age as j→∞ are given.