Retrieved 20 similar documents (search time: 31 ms)
1.
Denise Duarte Antonio Galves Nancy L. Garcia 《Bulletin of the Brazilian Mathematical Society》2006,37(4):581-592
We consider chains of infinite order whose transition probabilities depend on a finite suffix of the past. These suffixes are of variable length, and the set of lengths of all suffixes is unbounded. We assume that the transition probabilities for each of these suffixes are continuous with an exponential decay rate. For these chains, we prove the weak consistency of a modification of Rissanen's algorithm Context, which estimates the length of the suffix needed to predict the next symbol, given a finite sample. This generalizes to the unbounded case the original result proved for variable length Markov chains in the seminal paper of Rissanen (1983). Our basic tool is the canonical Markov approximation, which enables us to approximate the chain of infinite order by a sequence of variable length Markov chains of increasing order. Our proof is constructive, and we present an explicit decreasing upper bound for the probability of wrong estimation of the length of the current suffix.
2.
This paper develops bounds on the rate of decay of powers of Markov kernels on finite state spaces. These are combined with eigenvalue estimates to give good bounds on the rate of convergence to stationarity for finite Markov chains whose underlying graph has moderate volume growth. Roughly, for such chains, order (diameter)² steps are necessary and suffice to reach stationarity. We consider local Poincaré inequalities and use them to prove Nash inequalities. These are bounds on ℓ²-norms in terms of Dirichlet forms and ℓ¹-norms which yield decay rates for iterates of the kernel. This method is adapted from arguments developed by a number of authors in the context of partial differential equations and, later, in the study of random walks on infinite graphs. The main results do not require reversibility.
3.
Elizabeth L. Wilmer 《Journal of Theoretical Probability》2003,16(3):751-770
By proving a local limit theorem for higher-order transitions, we determine the time required for necklace chains to be close to stationarity. Because necklace chains, built by arranging identical smaller Markov chains around a directed cycle, are not reversible, have little symmetry, do not have uniform stationary distributions, and can be nearly periodic, prior general bounds on rates of convergence of Markov chains either do not apply or give poor bounds. Necklace chains can serve as test cases for future techniques for bounding rates of convergence.
4.
Persi Diaconis and Phil Hanlon in their interesting paper(4) give the rates of convergence of some Metropolis Markov chains on the cube Z_2^d. Markov chains on finite groups that are actually random walks are easier to analyze because the machinery of harmonic analysis is available. Unfortunately, Metropolis Markov chains are, in general, not random walks on a group. In attempting to understand Diaconis and Hanlon's work, the authors were led to the idea of a hypergroup deformation of a finite group G, i.e., a continuous family of hypergroups whose underlying space is G and whose structure is naturally related to that of G. Such a deformation is provided for Z_2^d, and it is shown that the Metropolis Markov chains studied by Diaconis and Hanlon can be viewed as random walks on the deformation. A direct application of the Diaconis–Shahshahani Upper Bound Lemma, which applies to random walks on hypergroups, is used to obtain the rate of convergence of the Metropolis chains starting at any point. When the Markov chains start at 0, a result in Diaconis and Hanlon(4) is obtained with exactly the same rate of convergence. These results are extended to Z_3^d.
Research supported in part by the Office of Research and Sponsored Programs, University of Oregon.
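The Metropolis construction mentioned above is easy to sanity-check numerically. The sketch below builds a Metropolis kernel on the cube Z_2^d whose base chain flips one uniformly chosen coordinate; the target distribution π(x) ∝ θ^(number of ones in x) and the value θ = 0.5 are our illustrative choices, not the hypergroup machinery of the paper:

```python
import itertools
import numpy as np

def metropolis_kernel_hypercube(d, theta):
    """Metropolis chain on Z_2^d targeting pi(x) proportional to theta**|x|,
    where |x| is the number of ones. Base chain: flip one uniform coordinate."""
    states = list(itertools.product([0, 1], repeat=d))
    index = {s: i for i, s in enumerate(states)}
    pi = np.array([theta ** sum(s) for s in states], dtype=float)
    pi /= pi.sum()
    n = len(states)
    P = np.zeros((n, n))
    for s in states:
        i = index[s]
        for k in range(d):
            t = list(s)
            t[k] ^= 1
            j = index[tuple(t)]
            # Accept the proposed flip with probability min(1, pi(t)/pi(s))
            P[i, j] = min(1.0, pi[j] / pi[i]) / d
        P[i, i] = 1.0 - P[i].sum()  # rejected proposals stay put
    return P, pi

P, pi = metropolis_kernel_hypercube(3, 0.5)
assert np.allclose(pi @ P, pi)           # pi is stationary
assert np.allclose(P.sum(axis=1), 1.0)   # P is stochastic
```

Detailed balance holds by construction (π_i P_ij = min(π_i, π_j)/d is symmetric in i and j), which is why the stationarity check passes.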
5.
Heyman gives an interesting factorization of I-P, where P is the transition probability matrix for an ergodic Markov chain. We show that this factorization is valid if and only if
the Markov chain is recurrent. Moreover, we provide a decomposition result which includes all ergodic, null recurrent as well
as the transient Markov chains as special cases. Such a decomposition has been shown to be useful in the analysis of queues.
This revised version was published online in June 2006 with corrections to the Cover Date.
6.
7.
Sangita Kulathinal Lagnojita Ghosh 《International Journal of Mathematical Education in Science & Technology》2013,44(4):498-502
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M (≥ 2) states.
8.
In this paper, we develop an algorithmic method for the evaluation of the steady state probability vector of a special class of finite state Markov chains. For the class of Markov chains considered here, it is assumed that the matrix associated with the set of linear equations for the steady state probabilities possesses a special structure, such that it can be rearranged and decomposed as a sum of two matrices, one lower triangular with nonzero diagonal elements, and the other an upper triangular matrix with only very few nonzero columns. Almost all Markov chain models of queueing systems with a finite source and/or finite capacity and a first-come-first-served or head-of-the-line nonpreemptive priority service discipline belong to this special class.
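To illustrate the kind of steady-state computation such algorithms accelerate, the sketch below solves π(I − P) = 0 directly for a small finite-capacity birth-death queue. This is a generic dense linear solve, not the triangular-decomposition scheme of the paper; the capacity K = 3 and the transition probabilities are our own toy choices:

```python
import numpy as np

def steady_state(P):
    """Stationary vector of an irreducible finite Markov chain.

    Solves pi (I - P) = 0 by replacing one redundant equation with
    the normalization sum(pi) = 1."""
    n = P.shape[0]
    A = (np.eye(n) - P).T
    A[-1, :] = 1.0               # normalization row
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Birth-death chain for a capacity-3 queue: arrival prob 0.3, service prob 0.5
K, lam, mu = 3, 0.3, 0.5
P = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        P[i, i + 1] = lam        # arrival
    if i > 0:
        P[i, i - 1] = mu         # service completion
    P[i, i] = 1.0 - P[i].sum()   # no change
pi = steady_state(P)
assert np.allclose(pi @ P, pi)
assert np.isclose(pi.sum(), 1.0)
```

The direct solve costs O(n³); the point of structured methods like the one in the abstract is to exploit the near-triangular shape of such matrices to do far better.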
9.
《Journal of Computational and Applied Mathematics》1986,15(3):383-394
In this paper we consider Markov chains of the following type: the state space is the set of vertices of a connected, regular graph, and from each vertex transitions are to the adjacent vertices, with equal probabilities. We prove that the mean first-passage matrix F of such a Markov chain is symmetric when the underlying graph is vertex-transitive. Hence, we can apply results from a previous paper, in which we investigated general, finite, ergodic Markov chains with the property F = Fᵀ.
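The symmetry claim can be checked numerically on a small vertex-transitive example. The sketch below computes the mean first-passage matrix for simple random walk on the cycle C₆ (our illustrative choice) by solving, for each target vertex, the standard first-passage linear system:

```python
import numpy as np

def mean_first_passage(P):
    """Mean first-passage matrix F with F[i, j] = E[steps from i to hit j]."""
    n = P.shape[0]
    F = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        # m_i = 1 + sum_{k != j} P[i, k] m_k  for all i != j
        A = np.eye(n - 1) - P[np.ix_(idx, idx)]
        m = np.linalg.solve(A, np.ones(n - 1))
        F[idx, j] = m
    return F

# Simple random walk on the cycle C_6: connected, regular, vertex-transitive
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.5
F = mean_first_passage(P)
assert np.allclose(F, F.T)          # symmetric, as the paper proves
assert np.isclose(F[0, 1], 5.0)     # on C_n, hitting a vertex at distance k takes k(n-k)
```

On a vertex-transitive graph F[i, j] depends only on the relationship between i and j, which is why the matrix comes out symmetric; on a merely regular but non-transitive graph it generally does not.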
10.
Decision-making in an environment of uncertainty and imprecision is a complex task in real-world problems. In this paper we introduce general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Finally, fuzzy Markov decision processes are introduced and discussed.
11.
We investigate how the stationary distribution of a Markov chain changes when transitions from a single state are modified. In particular, adding a single directed edge to nearest neighbor random walk on a finite discrete torus in dimensions one, two, or three changes the stationary distribution linearly, logarithmically, or only locally. Related results are derived for birth and death chains approximating Bessel diffusions and for random walk on the Sierpinski gasket.
12.
Markov chains have been frequently used to characterize uncertainty in many real-world problems. Quite often, these Markov chains can be decomposed into a vector consisting of fast and slow components; these components are coupled through weak and strong interactions. The main goal of this work is to study the structural properties of such Markov chains. Under mild conditions, it is proved that the underlying Markov chain can be approximated in the weak topology of L² by an aggregated process. Moreover, the aggregated process is shown to converge in distribution to a Markov chain as the rate of fast transitions tends to infinity. Under an additional Lipschitz condition, error bounds of the approximation sequences are obtained.
13.
Eduardo J. Subelman 《Stochastic Processes and their Applications》1976,4(3):253-259
We study the necessary and sufficient conditions for a finite ergodic Markov chain to converge in a finite number of transitions to its stationary distribution. Using this result, we describe the class of Markov chains which attain the stationary distribution in a finite number of steps, independent of the initial distribution. We then exhibit a queueing model that has a Markov chain embedded at the points of regeneration that falls within this class. Finally, we examine the class of continuous time Markov processes whose embedded Markov chain possesses the property of rapid convergence, and find that, in the case where the distribution of sojourn times is independent of the state, we can compute the distribution of the system at time t in the form of a simple closed expression.
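The simplest member of the class described above is a chain whose transition matrix already has every row equal to the stationary distribution: then one transition suffices from any start. The toy example below (our own, not the paper's general characterization) demonstrates this:

```python
import numpy as np

# A chain that is exactly stationary after one step, whatever the initial
# distribution: every row of P equals pi, so mu0 @ P = pi for any mu0.
pi = np.array([0.2, 0.3, 0.5])
P = np.tile(pi, (3, 1))

for start in range(3):
    mu0 = np.zeros(3)
    mu0[start] = 1.0
    assert np.allclose(mu0 @ P, pi)   # stationary after a single transition

assert np.allclose(P @ P, P)          # P is idempotent: no further change
```

Idempotence (P² = P) is the algebraic fingerprint of finite-step convergence in this trivial case; the paper characterizes exactly which ergodic chains admit some finite power with this constant-row property.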
14.
D. Racoceanu A. Elmoudni M. Ferney S. Zerhouni 《Mathematical and Computer Modelling of Dynamical Systems: Methods, Tools and Applications in Engineering and Related Sciences》2013,19(3):199-229
The practical usefulness of Markov models and Markov decision processes has been severely limited by their extremely large dimension, so a reduced model that does not sacrifice significant accuracy is very attractive. The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process is captured by the class. The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property. The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then enables us to make an assumption that yields the reduction: we reduce the ergodic class to its stronger part, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it is again a Markov chain.
15.
Noah Forman Soumik Pal Douglas Rizzolo Matthias Winkel 《Random Structures and Algorithms》2020,57(3):745-769
Consider the Aldous Markov chain on the space of rooted binary trees with n labeled leaves in which at each transition a uniform random leaf is deleted and reattached to a uniform random edge. Now, fix 1 ≤ k < n and project the leaf mass onto the subtree spanned by the first k leaves. This yields a binary tree with edge weights that we call a "decorated k-tree with total mass n." We introduce label swapping dynamics for the Aldous chain so that, when it runs in stationarity, the decorated k-trees evolve as Markov chains themselves, and are projectively consistent over k. The construction of projectively consistent chains is a crucial step in the construction of the Aldous diffusion on continuum trees by the present authors, which is the n→∞ continuum analog of the Aldous chain and will be taken up elsewhere.
16.
Fabrice Guillemin Bruno Sericola 《Methodology and Computing in Applied Probability》2007,9(4):521-540
Motivated by queueing systems playing a key role in the performance evaluation of telecommunication networks, we analyze in
this paper the stationary behavior of a fluid queue, when the instantaneous input rate is driven by a continuous-time Markov
chain with finite or infinite state space. In the case of an infinite state space and for particular classes of Markov chains
with a countable state space, such as quasi birth and death processes or Markov chains of the G/M/1 type, we develop an algorithm
to compute the stationary probability distribution function of the buffer level in the fluid queue. This algorithm relies
on simple recurrence relations satisfied by key characteristics of an auxiliary queueing system with normalized input rates.
17.
We study a Markov chain on generating n-tuples of a fixed group which arises in algorithms for manipulating finite groups. The main tools are comparison of two Markov
chains on different but related state spaces and combinatorics of random paths. The results involve group theoretical parameters
such as the size of minimal generating sets, the number of distinct generating k-tuples for different k's and the maximal diameter of the group.
Oblatum 6-VIII-1996 & 6-XI-97
18.
James Allen Fill 《Journal of Theoretical Probability》2009,22(3):587-600
An (upward) skip-free Markov chain with the set of nonnegative integers as state space is a chain for which upward jumps may
be only of unit size; there is no restriction on downward jumps. In a 1987 paper, Brown and Shao determined, for an irreducible
continuous-time skip-free chain and any d, the passage time distribution from state 0 to state d. When the nonzero eigenvalues ν_j of the generator on {0,…,d}, with d made absorbing, are all real, their result states that the passage time is distributed as the sum of d independent exponential random variables with rates ν_j. We give another proof of their theorem. In the case of birth-and-death chains, our proof leads to an explicit representation
of the passage time as a sum of independent exponential random variables. Diaconis and Miclo recently obtained the first such
representation, but our construction is much simpler.
We obtain similar (and new) results for a fastest strong stationary time T of an ergodic continuous-time skip-free chain with stochastically monotone time-reversal started in state 0, and we also
obtain discrete-time analogs of all our results.
In the paper’s final section we present extensions of our results to more general chains.
Research supported by NSF grant DMS–0406104, and by The Johns Hopkins University’s Acheson J. Duncan Fund for the Advancement
of Research in Statistics.
19.
Neil W. Henry Robert McGinnis Heinrich W. Tegtmeyer 《The Journal of mathematical sociology》2013,37(1):107-118
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the 'duration specific' chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing, in the latter it is regular.
20.
Moussa Kounta 《Journal of Difference Equations and Applications》2013,19(8):1276-1291
We consider the so-called gambler's ruin problem for a discrete-time Markov chain that converges to a Cox–Ingersoll–Ross (CIR) process. Both the probability that the chain will hit a given boundary before the other and the average number of transitions needed to end the game are computed explicitly. Furthermore, we show that the quantities that we obtained tend to the corresponding ones for the CIR process. A real-life application to a problem in hydrology is presented.
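Both quantities in the abstract (the hitting probability and the expected game length) are solutions of linear systems. The sketch below computes them for a plain birth-death gambler's-ruin chain on {0,…,N}; this is the generic textbook computation, not the CIR-approximating chain of the paper, and the parameters p and N are our own illustrative choices:

```python
import numpy as np

def ruin_quantities(p, N):
    """For the gambler's-ruin chain on {0,...,N} with up-probability p:
    h[i] = P(hit N before 0 | start i+1), t[i] = E[steps to absorption].

    Both solve the same tridiagonal system with different right-hand sides:
        h_i = p h_{i+1} + q h_{i-1},   h_0 = 0, h_N = 1
        t_i = 1 + p t_{i+1} + q t_{i-1},   t_0 = t_N = 0
    """
    q = 1.0 - p
    inner = N - 1
    A = np.zeros((inner, inner))
    bh = np.zeros(inner)           # rhs for hitting probabilities
    bt = np.ones(inner)            # rhs for expected durations
    for k in range(inner):
        i = k + 1                  # interior state this row represents
        A[k, k] = 1.0
        if k > 0:
            A[k, k - 1] = -q
        if k < inner - 1:
            A[k, k + 1] = -p
        if i + 1 == N:
            bh[k] = p              # boundary term: h_N = 1
    return np.linalg.solve(A, bh), np.linalg.solve(A, bt)

h, t = ruin_quantities(0.5, 10)
# Fair game: P(hit N before 0 | start i) = i/N and E[duration] = i(N - i)
i = np.arange(1, 10)
assert np.allclose(h, i / 10)
assert np.allclose(t, i * (10 - i))
```

The paper's contribution is showing that, for the particular chain approximating the CIR diffusion, these discrete quantities converge to their continuous counterparts; the linear-system structure above is unchanged.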