1.
Revindra M. Phatarfod 《Stochastic Processes and their Applications》1982,13(3):279-292
It is known that the main difficulty in applying the Markovian analogue of Wald's Identity is the presence, in the Identity, of the last state variable before the random walk is terminated. In this paper we show that this difficulty can be overcome if the underlying Markov chain has a finite state space. The absorption probabilities thus obtained are used, by employing a duality argument, to derive time-dependent and limiting probabilities for the depletion process of a dam with Markovian inputs. The second problem that is considered here is that of a non-homogeneous but cyclic Markov chain. An analogue of Wald's Identity is obtained for this case, and is used to derive time-dependent and limiting probabilities for the depletion process with inputs forming a non-homogeneous (cyclic) Markov chain.
2.
David J. Aldous 《Journal of Theoretical Probability》1989,2(1):91-100
For simple random walk on an N-vertex graph, the mean time to cover all vertices is at least cN log(N), where c > 0 is an absolute constant. This is deduced from a more general result about stationary finite-state reversible Markov chains. Under weak conditions, the covering time for such processes is at least c times the covering time for the corresponding i.i.d. process.
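As a rough illustration of the N log(N) scale in this bound (my own sketch, not from the paper; the cycle graph, N = 50, and the trial count are arbitrary choices), the following Python snippet estimates the mean cover time of simple random walk on an N-cycle by simulation and prints N log(N) alongside it.

```python
# Illustrative sketch (not from the paper): Monte Carlo estimate of the cover
# time of simple random walk on an N-cycle, compared against the N*log(N) scale.
import math
import random

def cover_time_cycle(N, rng):
    """Steps until SRW started at vertex 0 has visited every vertex of the N-cycle."""
    visited = {0}
    pos, steps = 0, 0
    while len(visited) < N:
        pos = (pos + rng.choice((-1, 1))) % N
        visited.add(pos)
        steps += 1
    return steps

rng = random.Random(0)
N, trials = 50, 200
mean_cover = sum(cover_time_cycle(N, rng) for _ in range(trials)) / trials
print(f"empirical mean cover time on the {N}-cycle: {mean_cover:.0f}")
print(f"N*log(N) lower-bound scale:                {N * math.log(N):.0f}")
```

For the cycle the true mean cover time is of order N^2, comfortably above the cN log(N) lower bound.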
3.
An absorbing Markov chain is an important statistical model, widely used in algorithm modeling for many disciplines, such as digital image processing and network analysis. In order to get the stationary distribution of such a model, the inverse of the transition matrix usually needs to be calculated, which is still difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution for models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it is able to handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains, and similar techniques can be used to avoid calculating the inverse matrix in the general method.
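A minimal sketch of the idea for the gambler's ruin chain (my own illustration, not the paper's exact construction; the parameters N and p are arbitrary): with two absorbing states the eigenvalue-1 eigenspace of the transition matrix is two-dimensional, and the combination that equals 0 at one absorbing state and 1 at the other gives the absorption probabilities, with no (I − Q)⁻¹ inversion.

```python
# Illustrative sketch (a simplified variant, not the paper's exact method):
# for a gambler's ruin chain with two absorbing states, recover the absorption
# probabilities from the eigenvalue-1 eigenspace of the transition matrix,
# avoiding the (I - Q)^{-1} inversion of the classical approach.
import numpy as np

N, p = 10, 0.5                      # fortune ranges over 0..N; win probability p
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0             # absorbing states: ruin (0) and goal (N)
for x in range(1, N):
    P[x, x + 1] = p
    P[x, x - 1] = 1.0 - p

# Right eigenvectors with eigenvalue 1 (a two-dimensional space here).
vals, vecs = np.linalg.eig(P)
basis = vecs[:, np.isclose(vals, 1.0)].real   # columns span the eigenspace

# Pick the combination h with h[0] = 0 and h[N] = 1; then h[x] is the
# probability of reaching N before 0 when starting from fortune x.
coeffs = np.linalg.solve(basis[[0, N], :], np.array([0.0, 1.0]))
h = basis @ coeffs

print(np.round(h, 4))
```

For p = 1/2 the printed vector should be approximately x/N, matching the classical gambler's ruin formula.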
4.
Computing the Stationary Distribution of Absorbing Markov Chains with One Eigenvector of Diagonalizable Transition Matrices
5.
Steven P. Lalley 《Journal of Theoretical Probability》1995,8(3):571-599
A random walk with reflecting zone on the nonnegative integers is a Markov chain whose transition probabilities q(x, y) are those of a random walk (i.e., q(x, y) = p(y − x)) outside a finite set {0, 1, 2, ..., K}, and such that the distribution q(x, ·) stochastically dominates p(· − x) for every x ∈ {0, 1, 2, ..., K}. Under mild hypotheses, it is proved that when Σ_x x·p_x > 0, the transition probabilities satisfy q^n(x, y) ~ C_{x,y} R^{−n} n^{−3/2} as n → ∞, and when Σ_x x·p_x = 0, q^n(x, y) ~ C_{x,y} n^{−1/2}. Supported by National Science Foundation Grant DMS-9307855.
6.
In Section 1, the authors establish the models of two kinds of Markov chains in space-time random environments (MCSTRE and MCSTRE(+)) with abstract state space. In Section 2, the authors construct a MCSTRE and a MCSTRE(+) by an initial distribution Φ and a random Markov kernel (RMK) p(γ). In Section 3, the authors establish several equivalence theorems on MCSTRE and MCSTRE(+). Finally, the authors give two very important examples of MCSTRE: the random walk in space-time random environment and the Markov br...
7.
8.
We consider the question of whether the simple random walk (SRW) on an infinite tree is transient or recurrent. For random trees (all vertices at distance n from the root of the tree have degree d_n, where {d_n} are independent random variables), we prove that the SRW is a.s. transient if lim inf_{n→∞} n·E(log(d_n − 1)) > 1 and a.s. recurrent if lim sup_{n→∞} n·E(log(d_n − 1)) < 1. For random trees in which the degrees of the vertices are independently 2 or 3, with distribution depending on the distance from the root, a partial classification of type is obtained. Research supported in part by NSF DMS 8710027.
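To make the criterion concrete, here is a small worked sketch (my own, using the degree-2-or-3 setting mentioned above with a hypothetical level-dependent probability p_n = c/(n log 2)): since E[log(d_n − 1)] = p_n log 2 in that setting, the quantity n·E[log(d_n − 1)] tends to c, so the criterion suggests transience for c > 1 and recurrence for c < 1.

```python
# Illustration (not from the paper): evaluate n * E[log(d_n - 1)] for random
# trees whose level-n degrees are independently 2 or 3 with P(d_n = 3) = p_n.
# Here E[log(d_n - 1)] = p_n * log 2, so the criterion reduces to n * p_n * log 2.
# The choice p_n = c / (n * log 2) is a hypothetical example: the criterion
# value tends to c, suggesting transience for c > 1 and recurrence for c < 1.
import math

def criterion_value(n, c):
    p_n = min(1.0, c / (n * math.log(2)))   # hypothetical level-n probability of degree 3
    expected_log = p_n * math.log(2)        # E[log(d_n - 1)]
    return n * expected_log

for c in (0.5, 2.0):
    values = [round(criterion_value(n, c), 3) for n in (10, 100, 1000, 10000)]
    verdict = "transient (lim inf > 1)" if c > 1 else "recurrent (lim sup < 1)"
    print(f"c = {c}: n*E[log(d_n-1)] -> {values}  =>  a.s. {verdict}")
```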
9.
The local limit theorem for the minimum of a random walk with Markovian increments is given, using Presman's factorization theory. This result implies the asymptotic behaviour of the survival probability for a critical branching process in a Markov-dependent random environment.
10.
David J. Aldous 《Journal of Theoretical Probability》1991,4(1):197-211
Under a natural hypothesis, the cover time for a finite Markov chain can be approximated by its expectation, as the size of the state space tends to infinity. This result is deduced from an abstract result concerning the covering of an unstructured set by i.i.d. arbitrarily distributed random subsets.
11.
We prove a new transience criterion for Markov chains on an arbitrary state space and give a corollary for real-valued chains. We show by example that in the case of a homogeneous random walk with infinite mean the proposed sufficient conditions are close to those necessary. We give a new proof of the well-known criterion for finiteness of the supremum of a random walk.
12.
The rotor-router model, also known as the Propp machine, is a deterministic process analogous to a random walk on a graph. Instead of distributing tokens to randomly chosen neighbors, the Propp machine deterministically serves the neighbors in a fixed order by associating to each vertex a "rotor-router" pointing to one of its neighbors. This paper investigates the discrepancy at a single vertex between the number of tokens in the rotor-router model and the expected number of tokens in a random walk, for finite graphs in general. We show that the discrepancy is bounded by O(mn) at any time for any initial configuration if the corresponding random walk is lazy and reversible, where n and m denote the numbers of nodes and edges, respectively. For a lower bound, we show examples of graphs and initial configurations for which the discrepancy at a single vertex is Ω(m) at any time (>0). For some special graphs, namely hypercube skeletons and Johnson graphs, we give a polylogarithmic upper bound, in terms of the number of nodes, for the discrepancy. © 2014 Wiley Periodicals, Inc. Random Struct. Alg., 46, 739–761, 2015
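A minimal simulation sketch (my own, not the paper's experiments; the cycle graph, token count, and rotor convention are arbitrary choices) that runs the rotor-router in parallel rounds on a small cycle and compares the token count at one vertex with the expected count under the corresponding simple random walk:

```python
# Illustrative sketch (not from the paper): rotor-router (Propp machine) on a
# small cycle versus the expected token counts of the corresponding random walk.
# Rotor convention assumed here: a token is sent to the current rotor target,
# after which the rotor advances to the next neighbor in a fixed cyclic order.
import numpy as np

n = 8                                   # vertices of a cycle 0..n-1
neighbors = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
P = np.zeros((n, n))
for v, nbrs in enumerate(neighbors):
    for w in nbrs:
        P[v, w] = 1.0 / len(nbrs)       # simple random walk transition matrix

tokens = np.zeros(n, dtype=int)
tokens[0] = 16                          # all tokens start at vertex 0
expected = tokens.astype(float)         # expected counts under the random walk
rotor = [0] * n                         # rotor position per vertex

for step in range(20):
    new_tokens = np.zeros(n, dtype=int)
    for v in range(n):
        for _ in range(tokens[v]):      # route each token deterministically
            new_tokens[neighbors[v][rotor[v]]] += 1
            rotor[v] = (rotor[v] + 1) % len(neighbors[v])
    tokens = new_tokens
    expected = expected @ P             # one step of the random walk in expectation

v = 0
print(f"after 20 rounds: rotor-router tokens at vertex {v} = {tokens[v]}, "
      f"random-walk expectation = {expected[v]:.2f}")
```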
13.
Mary Allison 《Linear and Multilinear Algebra》2013,61(7):801-823
A symmetric random walk on a graph G can be defined by prescribing weights to the edges in such a way that for each vertex the sum of the weights of the edges incident to the vertex is at most one. The fastest mixing Markov chain (FMMC) problem for G is to determine the weighting that yields the fastest mixing random walk. We solve the FMMC problem in the case that G is the union of two complete graphs.
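For orientation, the quantity the FMMC problem minimizes is the second-largest eigenvalue modulus (SLEM) of the weighted chain. The sketch below (my own; the 4-cycle and the candidate weightings are arbitrary, and the general FMMC semidefinite program is not reproduced) builds the chain of the abstract for a given edge weighting and reports its SLEM.

```python
# Illustrative sketch (not from the paper): for a given symmetric edge
# weighting, build the Markov chain described in the abstract and report its
# SLEM (second-largest eigenvalue modulus), the quantity the FMMC problem
# minimizes over all feasible weightings.
import numpy as np

def slem(edges, weights, n):
    """Transition matrix: P[u,v] = w(uv) on edges, remaining mass held at each vertex."""
    P = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        P[u, v] = P[v, u] = w
    for v in range(n):
        P[v, v] = 1.0 - P[v].sum()          # feasible when each row's weight sum is <= 1
    eigs = np.sort(np.abs(np.linalg.eigvalsh(P)))
    return eigs[-2]                         # second-largest modulus

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # 4-cycle
print("w = 1/2 on each edge:", slem(edges, [0.5] * 4, n))   # periodic chain, SLEM = 1
print("w = 1/3 on each edge:", slem(edges, [1 / 3] * 4, n)) # holding probability 1/3, SLEM = 1/3
```

With weight 1/2 on every edge the chain is periodic and never mixes (SLEM 1), while weight 1/3 leaves holding probability at each vertex and brings the SLEM down to 1/3; the FMMC weighting is the feasible weighting minimizing this quantity.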
14.
Kenneth S. Brown 《Journal of Theoretical Probability》2000,13(3):871-938
We analyze random walks on a class of semigroups called left-regular bands. These walks include the hyperplane chamber walks of Bidigare, Hanlon, and Rockmore. Using methods of ring theory, we show that the transition matrices are diagonalizable and we calculate the eigenvalues and multiplicities. The methods lead to explicit formulas for the projections onto the eigenspaces. As examples of these semigroup walks, we construct a random walk on the maximal chains of any distributive lattice, as well as two random walks associated with any matroid. The examples include a q-analogue of the Tsetlin library. The multiplicities of the eigenvalues in the matroid walks are generalized derangement numbers, which may be of independent interest.
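As a small numerical companion (my own check, not the paper's ring-theoretic argument): the Tsetlin library, i.e. the move-to-front chain on a few books with request weights w, has a diagonalizable transition matrix whose eigenvalues are sums of the w_i over subsets of the books, with multiplicities of derangement type as the abstract notes, so not every subset sum occurs. The sketch builds the chain on three books and prints its eigenvalues next to the subset weight sums; every printed eigenvalue should appear in the subset-sum list.

```python
# Numerical check (illustrative, not the paper's proof): move-to-front chain
# on three books; each eigenvalue of the chain is a subset sum of the weights.
import itertools
import numpy as np

w = {"a": 0.6, "b": 0.3, "c": 0.1}               # request probabilities (hypothetical)
books = sorted(w)
states = list(itertools.permutations(books))      # shelf orderings
index = {s: i for i, s in enumerate(states)}

P = np.zeros((len(states), len(states)))
for s in states:
    for b in books:                               # request book b, move it to the front
        t = (b,) + tuple(x for x in s if x != b)
        P[index[s], index[t]] += w[b]

eigs = np.sort(np.linalg.eigvals(P).real)
subset_sums = sorted(sum(w[b] for b in subset)
                     for r in range(len(books) + 1)
                     for subset in itertools.combinations(books, r))
print("chain eigenvalues :", np.round(eigs, 4))
print("subset weight sums:", np.round(subset_sums, 4))
```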
15.
O. F. Christensen, J. Møller, R. P. Waagepetersen 《Methodology and Computing in Applied Probability》2001,3(3):309-327
Conditional simulation is useful in connection with inference and prediction for a generalized linear mixed model. We consider random walk Metropolis and Langevin-Hastings algorithms for simulating the random effects given the observed data, when the joint distribution of the unobserved random effects is multivariate Gaussian. In particular we study the desirable property of geometric ergodicity, which ensures the validity of central limit theorems for Monte Carlo estimates.
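For readers who want to see the simpler of the two samplers in action, here is a generic random walk Metropolis sketch for a toy Poisson log-linear mixed model (my own illustration; the data, prior variance, and step size are hypothetical, and neither the authors' models nor their Langevin-Hastings proposal are reproduced):

```python
# Illustrative sketch (a generic random walk Metropolis update, not the
# authors' algorithm): sample the random effects u of a toy Poisson model,
# y_i ~ Poisson(exp(u_i)), u ~ N(0, sigma^2 I), conditionally on the counts y.
import numpy as np

rng = np.random.default_rng(0)
y = np.array([3, 0, 5, 2, 1])          # hypothetical observed counts
sigma = 1.0                            # prior standard deviation of the random effects
step = 0.3                             # proposal scale (would need tuning in practice)

def log_post(u):
    # log p(u | y) up to a constant: Poisson log-likelihood + Gaussian log-prior
    return np.sum(y * u - np.exp(u)) - 0.5 * np.sum(u ** 2) / sigma ** 2

u = np.zeros_like(y, dtype=float)
samples = []
for it in range(5000):
    proposal = u + step * rng.standard_normal(u.shape)     # symmetric random walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(u):
        u = proposal                                       # accept
    samples.append(u.copy())

print("posterior mean of u:", np.round(np.mean(samples[1000:], axis=0), 3))
```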
16.
We examine the long-run average availability and cost rate of a maintained system which deteriorates according to a random-shock process. Shocks arrive according to a Poisson process. The system fails whenever the cumulative damage exceeds a given threshold. The system's failures are not self-announcing; hence, failures must be detected via inspections. The system is inspected at periodic or exponentially distributed intervals. Systems are replaced by preventive maintenance or after failure (corrective maintenance), whichever occurs first. When the distribution function of the shock magnitudes belongs to the class of subexponential distributions, we obtain simple approximations for the availability and the cost rate. Copyright © 2007 John Wiley & Sons, Ltd.
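A heavily simplified Monte Carlo sketch of the availability calculation (my own; it keeps only periodic inspection with corrective replacement, treats replacement as instantaneous, uses an exponential shock-magnitude law rather than a subexponential one, and omits preventive maintenance and the paper's analytic approximations):

```python
# Heavily simplified sketch (not the paper's model): cumulative Poisson shock
# damage causes a hidden failure once it exceeds a threshold; the failure is
# detected (and the system replaced) at the next periodic inspection.
# Long-run availability is estimated as E[uptime] / E[cycle length].
import numpy as np

rng = np.random.default_rng(1)
lam, threshold, T = 1.0, 10.0, 2.0        # shock rate, failure threshold, inspection period

def one_cycle():
    damage, t = 0.0, 0.0
    while damage <= threshold:
        t += rng.exponential(1.0 / lam)   # next shock arrival
        damage += rng.exponential(1.0)    # shock magnitude (illustrative choice)
    failure_time = t
    detection_time = T * np.ceil(failure_time / T)   # next inspection after failure
    return failure_time, detection_time              # uptime, cycle length

cycles = [one_cycle() for _ in range(20000)]
up = np.mean([c[0] for c in cycles])
length = np.mean([c[1] for c in cycles])
print(f"estimated long-run availability: {up / length:.3f}")
```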
17.
Alessandro Baldi Antognini, Paola Bortot, Alessandra Giovagnoli 《Annals of the Institute of Statistical Mathematics》2008,60(1):45-59
An up and down (U&D) procedure is a sequential experiment used in binary response trials for identifying the treatment corresponding to a prespecified probability of positive response. Recently, a group version of U&D procedures has been proposed whereby at each stage a group of units is treated at the same level and the number of observed positive responses determines the treatment assigned to the next group. The deterministic nature of this algorithm leads to some limitations that in this paper we propose to overcome by introducing a randomization mechanism. A broad class of randomized group U&Ds is presented, giving the conditions for targeting the treatment level of interest. In addition, we study how the properties of the design change as we vary the method of randomization within this general class, and find randomization schemes which guarantee desirable results in terms of the asymptotic behavior of the experiment.
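For context, a plain deterministic group U&D rule looks like the sketch below (my own illustration; the dose grid, logistic response curve, group size, and cutoffs are hypothetical, and the randomized class studied in the paper is not reproduced). It is this kind of rule whose deterministic nature the paper proposes to randomize.

```python
# Illustrative sketch (a plain deterministic group up-and-down rule, not the
# paper's randomized designs): groups of size s are treated at the current
# dose level, and the count of positive responses decides whether the next
# group is treated one level up, one level down, or at the same level.
import numpy as np

rng = np.random.default_rng(2)
doses = np.linspace(0.0, 1.0, 11)                          # hypothetical dose grid
prob = lambda d: 1.0 / (1.0 + np.exp(-8 * (d - 0.45)))     # hypothetical response curve
s, low_cut, high_cut = 3, 0, 2                             # group size and decision cutoffs

level, visits = 0, np.zeros(len(doses), dtype=int)
for group in range(2000):
    visits[level] += 1
    responses = rng.binomial(s, prob(doses[level]))
    if responses <= low_cut and level < len(doses) - 1:
        level += 1                                         # too few responses: escalate
    elif responses >= high_cut and level > 0:
        level -= 1                                         # too many responses: de-escalate

most_visited = doses[np.argmax(visits)]
print(f"most visited dose level: {most_visited:.2f} "
      f"(response probability there: {prob(most_visited):.2f})")
```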
18.
《Stochastic Analysis and Applications》2013,31(4):819-847
Neutral stochastic differential delay equations (NSDDEs) have recently been studied intensively (see Kolmanovskii, V.B. and Nosov, V.R., Stability and Periodic Modes of Control Systems with Aftereffect; Nauka: Moscow, 1981 and Mao X., Stochastic Differential Equations and Their Applications; Horwood Pub.: Chichester, 1997). Given that many systems are often subject to component failures or repairs, changing subsystem interconnections, abrupt environmental disturbances, etc., the structure and parameters of the underlying NSDDEs may change abruptly. One way to model such abrupt changes is to use continuous-time Markov chains. As a result, the underlying NSDDEs become NSDDEs with Markovian switching, which are hybrid systems. So far little is known about NSDDEs with Markovian switching, and the aim of this paper is to close this gap. In this paper we will not only establish a fundamental theory for such systems but also discuss some important properties of the solutions, e.g. boundedness and stability.
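A minimal Euler-Maruyama sketch of an SDDE whose coefficients switch with a two-state continuous-time Markov chain (my own illustration; the coefficients and rates are hypothetical, and the neutral term that distinguishes a full NSDDE is omitted for brevity):

```python
# Illustrative Euler-Maruyama sketch (my own; the neutral term of a full NSDDE
# is omitted): a scalar delay equation dx = a(r(t)) x(t - tau) dt + b(r(t)) x(t) dW
# whose coefficients switch with a two-state continuous-time Markov chain r(t).
import numpy as np

rng = np.random.default_rng(3)
dt, tau, T = 0.01, 0.5, 10.0
a = {0: -1.0, 1: -0.2}        # drift coefficient in each regime (hypothetical)
b = {0: 0.1, 1: 0.4}          # diffusion coefficient in each regime (hypothetical)
q = 0.5                       # switching rate between the two regimes

lag = int(tau / dt)
steps = int(T / dt)
x = np.ones(steps + lag + 1)  # indices 0..lag hold the constant history on [-tau, 0]
r = 0
for k in range(lag, lag + steps):
    if rng.uniform() < q * dt:                 # regime switch of the Markov chain
        r = 1 - r
    drift = a[r] * x[k - lag]                  # delayed drift term
    diff = b[r] * x[k]                         # state-dependent diffusion term
    x[k + 1] = x[k] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()

print(f"x(T) = {x[-1]:.4f} (one sample path under Markovian switching)")
```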
19.