Similar Articles
20 similar articles found (search time: 12 ms)
1.
A measure of the “mixing time” or “time to stationarity” in a finite irreducible discrete time Markov chain is considered. The statistic η_i = Σ_j m_ij π_j, where {π_j} is the stationary distribution and m_ij is the mean first passage time from state i to state j of the Markov chain, is shown to be independent of the initial state i (so that η_i = η for all i), is minimal in the case of a periodic chain, yet can be arbitrarily large in a variety of situations. An application considering the effect that perturbations of the transition probabilities have on the stationary distributions of Markov chains leads to a new bound, involving η, for the 1-norm of the difference between the stationary probability vectors of the original and the perturbed chain. When η is large, the stationary distribution of the Markov chain is very sensitive to perturbations of the transition probabilities.
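The claimed independence of η_i from the starting state i can be checked numerically. A minimal sketch (the 3-state chain below is an arbitrary example, not from the paper), computing the mean first passage times via the fundamental matrix Z = (I − P + 1π^T)^{-1}:

```python
import numpy as np

# Arbitrary irreducible 3-state chain (illustrative example only).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi = np.linalg.lstsq(A, np.array([0.0] * n + [1.0]), rcond=None)[0]

# Fundamental matrix Z = (I - P + 1 pi^T)^{-1}.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

# Mean first passage times: m_ij = (z_jj - z_ij) / pi_j (with m_ii = 0).
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

eta = M @ pi  # eta_i = sum_j m_ij * pi_j
print(eta)    # all entries coincide: eta does not depend on i
```

Each entry of `eta` equals trace(Z) − 1, which is the standard fundamental-matrix identity behind the independence result.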

2.
We study the question of geometric ergodicity in a class of Markov chains on the state space of non-negative integers for which, apart from a finite number of boundary rows and columns, the elements p_jk of the one-step transition matrix are of the form c_{k−j}, where {c_k} is a probability distribution on the set of integers. Such a process may be described as a general random walk on the non-negative integers with boundary conditions affecting transition probabilities into and out of a finite set of boundary states. The imbedded Markov chains of several non-Markovian queueing processes are special cases of this form. It is shown that there is an intimate connection between geometric ergodicity and geometric bounds on one of the tails of the distribution {c_k}.
This research was supported by the U.S. Office of Naval Research Contract No. Nonr-855(09), and carried out while the author was a visitor in the Statistics Department, University of North Carolina, Chapel Hill.

3.
The paper deals with non-asymptotic computable bounds for the geometric convergence rate of homogeneous ergodic Markov processes. Some sufficient conditions are stated for simultaneous geometric ergodicity of classes of Markov chains. This property is applied to nonparametric estimation in ergodic diffusion processes.

4.
Let (X_n) be a positive recurrent Harris chain on a general state space, with invariant probability measure π. We give necessary and sufficient conditions for the geometric convergence of λP^n f towards its limit π(f), and show that when such convergence happens it is, in fact, uniform over f and in L_1(π)-norm. As a corollary we obtain that, when (X_n) is geometrically ergodic, ∫ π(dx)‖P^n(x,·) − π‖ converges to zero geometrically fast. We also characterize the geometric ergodicity of (X_n) in terms of hitting time distributions. We show that here the so-called small sets act like individual points of a countable state space chain. We give a test function criterion for geometric ergodicity and apply it to random walks on the positive half line. We apply these results to non-singular renewal processes on [0,∞), providing a probabilistic approach to the exponential convergence of renewal measures.
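The test function (drift) criterion mentioned above can be illustrated on a concrete random walk on the positive half line. A minimal sketch, assuming a hypothetical reflected walk with increments −1/0/+1 and the exponential test function V(x) = r^x; geometric ergodicity follows once E[V(X_{n+1}) | X_n = x] ≤ λ V(x) holds with λ < 1 outside a finite set:

```python
import numpy as np

# Reflected random walk on {0, 1, 2, ...}: X_{n+1} = max(X_n + Z, 0),
# with P(Z=-1)=0.5, P(Z=0)=0.2, P(Z=+1)=0.3 (negative mean drift).
p_down, p_stay, p_up = 0.5, 0.2, 0.3

# Test function V(x) = r**x with r > 1.  For x >= 1,
#   E[V(X_{n+1}) | X_n = x] = V(x) * (p_down/r + p_stay + p_up*r),
# so the drift factor below is independent of x; geometric
# ergodicity requires it to be < 1 outside the finite set {0}.
def drift_factor(r):
    return p_down / r + p_stay + p_up * r

# The factor is minimized at r* = sqrt(p_down / p_up).
r_star = np.sqrt(p_down / p_up)
lam = drift_factor(r_star)
print(r_star, lam)  # lam < 1 certifies the drift condition
```

Here the optimal factor is λ = 2√(p_down · p_up) + p_stay ≈ 0.975 < 1, so the criterion is satisfied.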

5.
Discrete Markov chains are widely applied in the analysis and design of high-speed ATM networks due to their essentially discrete nature. Unfortunately, their use is precluded for many important problems by the explosion of the state space cardinality. In this paper we propose a new method for approximating a discrete Markov chain by a chain of considerably smaller dimension, based on the duality theory of optimization. A novel feature of our approach is that it provides guaranteed upper and lower bounds for the performance indices defined on the steady-state distribution of the original system. We apply our method to the problem of engineering multiplexers for ATM networks.

6.
We derive sufficient conditions for ∫ λ(dx)‖P^n(x,·) − π‖ to be of order o(ψ(n)^{-1}), where P^n(x, A) are the transition probabilities of an aperiodic Harris recurrent Markov chain, π is the invariant probability measure, λ an initial distribution, and ψ belongs to a suitable class of non-decreasing sequences. The basic condition involved is ergodicity of order ψ, which in a countable state space is equivalent to Σ_n ψ(n) P_i{τ_i > n} < ∞ for some i, where τ_i is the hitting time of the state i. We also show that for a general Markov chain to be ergodic of order ψ it suffices that a corresponding condition is satisfied by a small set. We apply these results to non-singular renewal measures on R, providing a probabilistic method to estimate the right tail of the renewal measure when the increment distribution F satisfies ∫ t F(dt) > 0 and ∫ ψ(t)(1 − F(t)) dt < ∞.

7.
Summary. The standard perturbation theory for linear equations states that nearly uncoupled Markov chains (NUMCs) are very sensitive to small changes in the elements. Indeed, some algorithms, such as standard Gaussian elimination, will obtain poor results for such problems. A structured perturbation theory is given that shows that NUMCs usually lead to well conditioned problems. It is shown that, with appropriate stopping criteria, iterative aggregation/disaggregation algorithms will achieve these structured error bounds. A variant of Gaussian elimination due to Grassmann, Taksar and Heyman was recently shown by O'Cinneide to achieve such bounds.
Supported by the National Science Foundation under grant CCR-9000526 and its renewal, grant CCR-9201692. This research was done, in part, during the author's visit to the Institute for Mathematics and its Applications, 514 Vincent Hall, 206 Church St. S.E., University of Minnesota, Minneapolis, MN 55455, USA.

8.
The aim of this paper is to solve the basic stochastic shortest-path problem (SSPP) for Markov chains (MCs) with countable state space and then apply the results to a class of nearest-neighbor MCs on the lattice state space $\mathbb{Z} \times \mathbb{Z}$ whose only moves are one step up, down, to the right or to the left. The objective is to control the MC, by suppressing certain moves, so as to minimize the expected time to reach a certain given target state. We characterize the optimal policies for SSPPs for general MCs with countably infinite state space, the main tool being a verification theorem for the value function, and give an algorithmic construction. Then we apply the results to a large class of examples: nearest-neighbor MCs for which the state space $\mathbb{Z} \times \mathbb{Z}$ is split by a vertical line into two regions inside which the transition probabilities are the same for every state. We give a necessary and sufficient condition for the so-called distance-diminishing policy to be optimal. For the general case in which this condition does not hold we develop an explicit finite construction of an optimal policy.
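The control idea of minimizing expected hitting time by suppressing moves can be sketched with value iteration. This is a hypothetical one-dimensional example on a truncated line, not the paper's two-region lattice construction: the controller may suppress the rightward move (its mass then stays put), and V(s) is the minimal expected time to reach the target state 0.

```python
import numpy as np

N = 10            # truncated state space {0, ..., N}; target is state 0
p, q = 0.4, 0.3   # probabilities of moving left / right (rest: stay put)

def step_probs(s, suppress_right):
    """Transition distribution from state s >= 1 under the chosen action:
    suppressing the rightward move folds its mass into 'stay'."""
    probs = np.zeros(N + 1)
    right = 0.0 if suppress_right else q
    probs[s - 1] += p
    probs[min(s + 1, N)] += right
    probs[s] += 1.0 - p - right
    return probs

# Value iteration for V(s) = 1 + min_a sum_s' P_a(s, s') V(s'), V(0) = 0.
V = np.zeros(N + 1)
for _ in range(200000):
    newV = V.copy()
    for s in range(1, N + 1):
        newV[s] = 1.0 + min(step_probs(s, a) @ V for a in (True, False))
    converged = np.max(np.abs(newV - V)) < 1e-13
    V = newV
    if converged:
        break
```

In this example always suppressing is optimal (it is the analogue of a distance-diminishing policy), and the fixed point is V(s) = s / p = 2.5 s.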

9.
A general procedure for creating Markovian interest rate models is presented. The models created by this procedure automatically fit within the HJM framework and fit the initial term structure exactly; therefore they are arbitrage free. Because the models created by this procedure have only one state variable per factor, two- and even three-factor models can be computed efficiently, without resorting to Monte Carlo techniques. This computational efficiency makes calibration of the new models to market prices straightforward. Extended Hull-White, extended CIR, Black-Karasinski, Jamshidian's Brownian path independent models, and Flesaker and Hughston's rational log normal models are one-state variable models which fit naturally within this theoretical framework. The ‘separable’ n-factor models of Cheyette and Li, Ritchken, and Sankarasubramanian - which require n(n + 3)/2 state variables - are degenerate members of the new class of models with n(n + 3)/2 factors. The procedure is used to create a new class of one-factor models, the ‘β-η models’. These models can match the implied volatility smiles of swaptions and caplets, and thus enable one to eliminate smile error. The β-η models are also exactly solvable in that their transition densities can be written explicitly. For these models accurate - but not exact - formulas are presented for caplet and swaption prices, and it is indicated how these closed form expressions can be used to efficiently calibrate the models to market prices.

10.
11.
We argue that the spectral theory of non-reversible Markov chains may often be more effectively cast within the framework of the naturally associated weighted-L∞ space L_∞^V, instead of the usual Hilbert space L_2 = L_2(π), where π is the invariant measure of the chain. This observation is, in part, based on the following results. A discrete-time Markov chain with values in a general state space is geometrically ergodic if and only if its transition kernel admits a spectral gap in L_∞^V. If the chain is reversible, the same equivalence holds with L_2 in place of L_∞^V. In the absence of reversibility it fails: there are (necessarily non-reversible, geometrically ergodic) chains that admit a spectral gap in L_∞^V but not in L_2. Moreover, if a chain admits a spectral gap in L_2, then for any h ∈ L_2 there exists a Lyapunov function V_h ∈ L_1 such that V_h dominates h and the chain admits a spectral gap in L_∞^{V_h}. The relationship between the size of the spectral gap in L_∞^V or L_2, and the rate at which the chain converges to equilibrium is also briefly discussed.

12.
Let be a homogeneous Markov chain on an unbounded Borel subset of with a drift function which tends to a limit at infinity. Under a very simple hypothesis on the chain we prove that converges in distribution to a normal law where the variance depends on the asymptotic behaviour of . When goes to zero quickly enough and , the random centering may be replaced by These results are applied to the case of random walks on some hypergroups.



13.
We obtain sufficient criteria for central limit theorems (CLTs) for ergodic continuous-time Markov chains (CTMCs). We apply the results to establish CLTs for continuous-time single birth processes. Moreover, we present an explicit expression for the time-average variance constant for a single birth process whenever a CLT exists. Several examples are given to illustrate these results.

14.
Multistate transition models are increasingly used in credit risk applications, as they allow us to quantify the evolution of the process among different states. If the process is Markov, analysis and prediction are substantially simpler, so analysts would like to use these models when they are applicable. In this paper, we develop a procedure for assessing the Markov hypothesis and discuss different ways of implementing the test procedure. One issue when the sample size is large is that statistical test procedures will detect even small deviations from the Markov model when these differences are not of practical interest. To address this problem, we propose an approach to formulate and test the null hypothesis of “weak non-Markov.” The situation where the transition probabilities are heterogeneous is also examined, and approaches to accommodate this case are indicated. Simulation studies are used extensively to study the properties of the procedures, and two applications are presented to illustrate the results.
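The abstract does not spell out its test procedure; a classical way to assess the Markov hypothesis (offered here only as an illustrative alternative) is a likelihood-ratio test of first-order against second-order dependence, using counts of observed state triples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a genuinely first-order 2-state Markov chain (example data).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
x = [0]
for _ in range(5000):
    x.append(rng.choice(2, p=P[x[-1]]))

# Count triples n[i, j, k] = #{t : X_t=i, X_{t+1}=j, X_{t+2}=k}.
n = np.zeros((2, 2, 2))
for a, b, c in zip(x, x[1:], x[2:]):
    n[a, b, c] += 1

# Pooled (first-order) transition estimates from the same triples.
pair = n.sum(axis=0)
phat = pair / pair.sum(axis=1, keepdims=True)

# Likelihood-ratio statistic: second-order vs first-order model; under
# the Markov hypothesis it is asymptotically chi-square, m(m-1)^2 = 2 df.
mask = n > 0
expected = n.sum(axis=2, keepdims=True) * phat[None, :, :]
stat = 2.0 * np.sum(n[mask] * np.log(n[mask] / expected[mask]))
print(stat)
```

A large value of `stat` relative to the chi-square critical value rejects the first-order Markov hypothesis; the paper's "weak non-Markov" formulation addresses the large-sample oversensitivity of exactly this kind of test.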

15.
16.
The following modification of a general state space discrete-time Markov chain is considered: certain transitions are supposed “forbidden” and the chain evolves until there is such a transition. At this instant the value of the chain is “replaced” according to a given rule, and, starting from the new value, the chain evolves normally until there is a forbidden transition again; the cycle is then repeated. The relationship of this modified process to the original one is studied in general terms, with particular emphasis being given to invariant measures. Examples are given which illustrate the results obtained.

17.
We obtain an upper escape rate function for a continuous time minimal symmetric Markov chain defined on a locally finite weighted graph. This upper rate function, which has the same form as in the manifold setting, is given in terms of the volume growth with respect to an adapted path metric. Our approach also gives a weak form of Folz’s theorem on conservativeness as a consequence.

18.
In this paper we extend standard dynamic programming results for the risk sensitive optimal control of discrete time Markov chains to a new class of models. The state space remains finite, but now the assumptions about the Markov transition matrix are much less restrictive. Our results are then applied to the financial problem of managing a portfolio of assets which are affected by Markovian microeconomic and macroeconomic factors and where the investor seeks to maximize the portfolio's risk-adjusted growth rate.

19.
A reduced system is a smaller system derived in the process of analyzing a larger system. When solving for the steady-state probabilities of a Markov chain, the solution can generally be found by first solving a reduced system of equations obtained by appropriately partitioning the transition probability matrix. In this paper, we categorize reduced systems as standard and nonstandard and explore the existence of reduced systems and their properties relative to the original system. We also discuss first passage probabilities and means for the standard reduced system relative to the original system. These properties are illustrated while determining the steady-state probabilities and first passage time characteristics of a queueing system.
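One standard way to obtain such a reduced system (a sketch of the stochastic-complement construction, not necessarily the paper's exact formulation): partition P into blocks over state sets A and B; then S = P_AA + P_AB (I − P_BB)^{-1} P_BA is itself a transition matrix, and its stationary vector is the full chain's π restricted to A and renormalized.

```python
import numpy as np

# Arbitrary irreducible 4-state chain (example only); A = {0,1}, B = {2,3}.
P = np.array([[0.10, 0.30, 0.40, 0.20],
              [0.20, 0.20, 0.30, 0.30],
              [0.25, 0.25, 0.25, 0.25],
              [0.30, 0.10, 0.40, 0.20]])

# Stochastic complement (reduced system) for the A-states:
# S = P_AA + P_AB (I - P_BB)^{-1} P_BA.
S = P[:2, :2] + P[:2, 2:] @ np.linalg.inv(np.eye(2) - P[2:, 2:]) @ P[2:, :2]

def stationary(Q):
    """Stationary distribution of a stochastic matrix Q."""
    k = Q.shape[0]
    A = np.vstack([Q.T - np.eye(k), np.ones(k)])
    return np.linalg.lstsq(A, np.array([0.0] * k + [1.0]), rcond=None)[0]

pi = stationary(P)     # full-chain stationary distribution
pi_S = stationary(S)   # stationary distribution of the reduced system
# pi_S equals pi[:2] / pi[:2].sum(): the reduced system preserves the
# relative steady-state probabilities of the A-states.
```

Solving the 2x2 reduced system and then back-substituting for the B-states recovers the full stationary vector, which is the computational appeal of the approach.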

20.
Let P be a transition matrix which is symmetric with respect to a measure π. The spectral gap of P in L_2(π)-space, denoted by gap(P), is defined as the distance between 1 and the rest of the spectrum of P. In this paper, we study the relationship between gap(P) and the convergence rate of P^n. When P is transient, the convergence rate of P^n is equal to 1 − gap(P). When P is ergodic, we give explicit upper and lower bounds for the convergence rate of P^n in terms of gap(P). These results are extended to L^∞(π)-space.
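For a finite π-symmetric (reversible) chain the spectrum, and hence gap(P), can be computed directly. A minimal sketch (the birth-death chain below is an arbitrary example) using the similarity transform D^{1/2} P D^{-1/2}, which is a symmetric matrix with the same spectrum as P acting on L_2(π):

```python
import numpy as np

# Arbitrary 3-state birth-death chain (birth-death chains are always
# reversible, i.e. symmetric with respect to their stationary measure pi).
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.6, 0.4]])

# Stationary distribution from the linear system pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Detailed balance check (pi-symmetry): pi_i p_ij = pi_j p_ji.
F = pi[:, None] * P
assert np.allclose(F, F.T)

# Symmetrize: S = D^{1/2} P D^{-1/2}; real spectrum, shared with P.
d = np.sqrt(pi)
S = (d[:, None] * P) / d[None, :]
eig = np.sort(np.linalg.eigvalsh(S))[::-1]  # descending; eig[0] is 1

gap = 1.0 - eig[1]  # distance from 1 to the rest of the spectrum
print(gap)
```

Since the spectrum is real and bounded above by 1, the distance from 1 to the rest of the spectrum is 1 minus the second-largest eigenvalue.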
