Found 20 similar documents; search time 15 ms
1.
We analyze the role of recurrent mutation on the time to formation or detection of particular genotypes in finite populations. The traditional method of approximating Markov chain models by diffusion processes is used. However, the diffusion approximations that arise in this context are governed not only by infinitesimal drift and diffusion coefficients, but also by a state-dependent killing rate that arises from the formation events. The resulting processes are particularly tractable, and allow a comprehensive analysis of the role of mutation in this problem.
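The formation-time question above can be illustrated with a toy simulation. The sketch below assumes a haploid Wright-Fisher-style population in which each of the `pop_size` offspring per generation independently carries the target mutation with probability `mu`; the function name, population size, mutation rate and seed are illustrative choices of ours, not values from the paper:

```python
import random

def time_to_first_mutant(pop_size=100, mu=1e-3, seed=1):
    """Generations until the target genotype first appears under
    recurrent mutation (toy Wright-Fisher-style model)."""
    rng = random.Random(seed)
    generations = 0
    while True:
        generations += 1
        # each of the pop_size offspring mutates independently w.p. mu
        if any(rng.random() < mu for _ in range(pop_size)):
            return generations

t = time_to_first_mutant()
```

Averaged over many seeds, the waiting time is geometric with mean 1/(1 - (1 - mu)^N), roughly the discrete counterpart of the state-dependent killing rate mentioned in the abstract.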
2.
3.
4.
Josep Díaz Leslie Ann Goldberg David Richerby Maria Serna 《Random Structures and Algorithms》2016,49(1):137-159
The Moran process models the spread of mutations in populations on graphs. We investigate the absorption time of the process, which is the time taken for a mutation introduced at a randomly chosen vertex either to spread to the whole population or to become extinct. It is known that the expected absorption time for an advantageous mutation is polynomial on an n-vertex undirected graph, which allows the behaviour of the process on undirected graphs to be analysed using the Markov chain Monte Carlo method. We show that this does not extend to directed graphs, by exhibiting an infinite family of directed graphs for which the expected absorption time is exponential in the number of vertices. For regular directed graphs, however, we show polynomial upper and lower bounds on the expected absorption time. We exhibit families of graphs matching these bounds and give improved bounds for other families of graphs, based on the isoperimetric number. Our results are obtained via stochastic dominations, which we demonstrate by establishing a coupling in a related continuous-time model. The coupling also implies several natural domination results regarding the fixation probability of the original (discrete-time) process, resolving a conjecture of Shakarian, Roos and Johnson. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 137–159, 2016
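The absorption time studied in this abstract is easy to estimate empirically. Below is a minimal sketch of the discrete-time Moran process on a graph given as an adjacency list; the fitness `r`, the complete-graph example and the seed are our own illustrative choices:

```python
import random

def moran_absorption_time(adj, r=1.5, seed=0):
    """Run the Moran process on a (di)graph given as an adjacency list;
    return (steps until absorption, whether the mutant fixated)."""
    rng = random.Random(seed)
    n = len(adj)
    mutant = [False] * n
    mutant[rng.randrange(n)] = True   # mutant placed at a random vertex
    k, steps = 1, 0
    while 0 < k < n:                  # absorb at extinction (0) or fixation (n)
        steps += 1
        # reproducer chosen proportional to fitness (mutants have fitness r)
        weights = [r if mutant[v] else 1.0 for v in range(n)]
        u = rng.choices(range(n), weights=weights)[0]
        w = rng.choice(adj[u])        # offspring replaces a random out-neighbour
        if mutant[w] != mutant[u]:
            k += 1 if mutant[u] else -1
            mutant[w] = mutant[u]
    return steps, k == n

# complete graph on 6 vertices
adj = [[v for v in range(6) if v != u] for u in range(6)]
steps, fixed = moran_absorption_time(adj)
```

Averaging `steps` over many seeds and growing families of directed graphs is exactly the experiment in which the polynomial/exponential dichotomy of the abstract shows up.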
5.
Anders Grimvall 《Stochastic Processes and their Applications》1973,1(4):335-368
Starting from a real-valued Markov chain X0, X1, …, Xn with stationary transition probabilities, a random element {Y(t); t ∈ [0, 1]} of the function space D[0, 1] is constructed by letting Y(k/n) = Xk, k = 0, 1, …, n, and taking Y(t) constant in between. Tightness criteria for sequences {Yn(t); t ∈ [0, 1]}, n ≥ 1, of such random elements in D[0, 1] are then given in terms of the one-step transition probabilities of the underlying Markov chains. Applications are made to Galton-Watson branching processes.
6.
Thomas Kaijser 《Acta Mathematica Sinica (English Series)》2011,27(3):441-476
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P.
7.
《Stochastic Analysis and Applications》2013,31(2):419-441
We consider the stochastic model of water pollution, which mathematically can be written with a stochastic partial differential equation driven by Poisson measure noise. We use a stochastic particle Markov chain method to produce an implementable approximate solution. Our main result is the annealed law of large numbers establishing convergence in probability of our Markov chains to the solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains.
8.
9.
François Bonhoure Yves Dallery William J. Stewart 《Numerical Linear Algebra with Applications》1994,1(3):265-286
Markov chains with periodic graphs arise frequently in a wide range of modelling experiments. Application areas range from flexible manufacturing systems in which pallets are treated in a cyclic manner to computer communication networks. The primary goal of this paper is to show how advantage may be taken of this property to reduce the amount of computer memory and computation time needed to compute stationary probability vectors of periodic Markov chains. After reviewing some basic properties of Markov chains whose associated graph is periodic, we introduce a ‘reduced scheme’ in which only a subset of the probability vector need be computed. We consider the effect of the application of both direct and iterative methods to the original Markov chain (permuted to normal periodic form) as well as to the reduced system. We show how the periodicity of the Markov chain may be efficiently computed as the chain itself is being generated. Finally, some numerical experiments are presented to illustrate the theory.
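The abstract notes that the periodicity of a Markov chain can be computed while the chain is generated. One standard way, sketched below under the assumption that the chain's graph is strongly connected, takes the gcd of level(u) + 1 - level(v) over the edges met during a single breadth-first search (the function name and the toy inputs are our own):

```python
from collections import deque
from math import gcd

def chain_period(succ, start=0):
    """Period of a strongly connected Markov chain graph: gcd of
    level(u) + 1 - level(v) over all edges u -> v, from one BFS."""
    level = {start: 0}
    g = 0
    q = deque([start])
    while q:
        u = q.popleft()
        for v in succ[u]:
            if v not in level:
                level[v] = level[u] + 1   # tree edge contributes 0
                q.append(v)
            else:
                g = gcd(g, abs(level[u] + 1 - level[v]))
    return g

# a 3-cycle has period 3; a self-loop forces period 1
p3 = chain_period({0: [1], 1: [2], 2: [0]})
p1 = chain_period({0: [0, 1], 1: [0]})
```

Because every edge is examined exactly once as it is produced, the same gcd can be maintained incrementally during chain generation, which is the efficiency point made in the abstract.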
10.
Ruin Probabilities under a Markovian Risk Model
Han-xing Wang Da-fan Fang Mao-ning Tang 《Acta Mathematicae Applicatae Sinica (English Series)》2003,19(4):621-630
In this paper, a Markovian risk model is developed, in which the occurrence of the claims is described by a point process {N(t)}_{t≥0}, with N(t) being the number of jumps of a Markov chain during the interval [0, t]. For the model, the explicit form of the ruin probability ψ(0) and a bound for the convergence rate of the ruin probability ψ(u) are given by using the generalized renewal technique developed in this paper. Finally, we prove that the ruin probability ψ(u) is a linear combination of negative exponential functions in the special case when the claims are exponentially distributed and the Markov chain has an intensity matrix (q_ij)_{i,j∈E} such that q_m = q_{m1} and q_i = q_{i(i+1)}, 1 ≤ i ≤ m−1.
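Ruin probabilities of this kind are straightforward to estimate by Monte Carlo. The sketch below uses the classical compound-Poisson model with exponential claims as a simple stand-in for the paper's Markov-modulated model; all parameter values, the finite horizon, and the function name are illustrative:

```python
import random

def ruin_probability(u, lam=1.0, c=1.5, mean_claim=1.0,
                     horizon=200.0, n_paths=2000, seed=7):
    """Crude Monte Carlo estimate of the ruin probability psi(u) over a
    finite horizon: surplus starts at u, grows at premium rate c, and
    pays exponential claims arriving at Poisson rate lam."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, x = 0.0, u
        while t < horizon:
            w = rng.expovariate(lam)              # inter-claim time
            t += w
            x += c * w                            # premium income
            x -= rng.expovariate(1 / mean_claim)  # exponential claim
            if x < 0:
                ruined += 1
                break
    return ruined / n_paths

est = ruin_probability(2.0)
```

For these parameters the classical infinite-horizon value is psi(0) = lam * mean_claim / c = 2/3, so the u = 0 estimate should land near that; the paper's contribution is the analogous explicit form when the claim arrivals are driven by a Markov chain.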
11.
12.
13.
J. Kohlas 《Mathematical Methods of Operations Research》1986,30(5):A197-A207
This paper discusses an efficient method to compute mean passage times and absorption probabilities in Markov and semi-Markov models. It uses the state reduction approach introduced by Winfried Grassmann for the computation of the stationary distribution of a Markov model. The method is numerically stable and has a simple probabilistic interpretation. It is especially stressed that the natural frame for the state reduction method is semi-Markov theory rather than Markov theory.
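Grassmann's state-reduction approach mentioned here is, for stationary distributions, the GTH (Grassmann-Taksar-Heyman) algorithm: censor states one by one, then recover probabilities by back-substitution. A minimal sketch for an irreducible chain (the subtraction-free updates are what make it numerically stable):

```python
def gth_stationary(P):
    """Stationary distribution of an irreducible Markov chain by
    Grassmann-Taksar-Heyman state reduction (no subtractions)."""
    n = len(P)
    A = [row[:] for row in P]
    # reduction: censor states n-1, ..., 1
    for k in range(n - 1, 0, -1):
        s = sum(A[k][j] for j in range(k))        # mass leaving k for kept states
        for i in range(k):
            for j in range(k):
                A[i][j] += A[i][k] * A[k][j] / s  # route paths through k
    # back-substitution
    pi = [0.0] * n
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = (sum(pi[i] * A[i][k] for i in range(k))
                 / sum(A[k][j] for j in range(k)))
    total = sum(pi)
    return [x / total for x in pi]

# two-state chain: stationary distribution is (2/3, 1/3)
pi = gth_stationary([[0.9, 0.1], [0.2, 0.8]])
```

The same elimination sweep, applied to the transient part of a chain, yields the mean passage times and absorption probabilities that the paper computes; the semi-Markov reading interprets each censoring step probabilistically.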
14.
《Stochastic Processes and their Applications》2020,130(4):2127-2158
Let (X_t) be an ergodic diffusion with invariant distribution ν. Consider the empirical measure ν_n of an Euler scheme with decreasing steps which approximates (X_t). Given a test function f, we obtain sharp concentration inequalities for ν_n(f) − ν(f) which improve the results in Honoré et al. (2019). Our hypotheses on the test function cover many real applications: either f is supposed to be a coboundary of the infinitesimal generator of the diffusion, or f is supposed to be Lipschitz.
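The decreasing-step Euler scheme can be sketched for a concrete diffusion. Below, an Ornstein-Uhlenbeck process with steps gamma_k = k^(-0.6) and test function f(x) = x^2; the process, step exponent and seed are illustrative choices of ours (for OU the invariant value of f is sigma^2 / (2*theta)):

```python
import math
import random

def decreasing_step_euler(n=20000, theta=1.0, sigma=1.0, seed=3):
    """Euler scheme with decreasing steps gamma_k = k**-0.6 for the
    OU SDE dX = -theta*X dt + sigma dW; returns the step-weighted
    empirical mean of f(x) = x**2 (an estimate of nu(f))."""
    rng = random.Random(seed)
    x, num, den = 0.0, 0.0, 0.0
    for k in range(1, n + 1):
        g = k ** -0.6
        x += -theta * x * g + sigma * math.sqrt(g) * rng.gauss(0.0, 1.0)
        num += g * x * x          # empirical measure weights each state by its step
        den += g
    return num / den              # should approach sigma**2 / (2 * theta) = 0.5

val = decreasing_step_euler()
```

The concentration inequalities of the paper quantify how tightly such an estimate clusters around ν(f); here f(x) = x^2 is Lipschitz only locally, so it sits at the edge of the hypotheses described in the abstract.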
15.
D. Racoceanu A. Elmoudni M. Ferney S. Zerhouni 《Mathematical and Computer Modelling of Dynamical Systems: Methods, Tools and Applications in Engineering and Related Sciences》2013,19(3):199-229
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive. The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently after being entered. The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property. The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then enables an assumption that yields the reduction: we keep only the stronger part of the ergodic class, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it is again a Markov chain.
16.
Large Deviations for Empirical Measures of Not Necessarily Irreducible Countable Markov Chains with Arbitrary Initial Measures
Yi Wen JIANG Li Ming WU 《Acta Mathematica Sinica (English Series)》2005,21(6):1377-1390
All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.
17.
S. Amghibech 《Discrete Applied Mathematics》2008,156(1):1-10
For a real number p with p > 1, we consider the first eigenvalues of the p-Laplacian on graphs, and estimates for the solutions of p-Laplace equations on graphs. We provide a discrete version of Picone's identity and its application. More precisely, we prove a Barta-type inequality for graphs with boundary. Finally, we provide a discrete version of the anti-maximum principle.
18.
《Optimization》2012,61(12):1427-1447
This article is concerned with the limiting average variance for discrete-time Markov control processes in Borel spaces, subject to pathwise constraints. Under suitable hypotheses we show that within the class of deterministic stationary optimal policies for the pathwise-constrained problem, there exists one with minimal variance.
19.
Jürgen Groh 《Stochastic Processes and their Applications》1980,10(3):271-297
An extension of the work of P. Mandl concerning the optimal control of time-homogeneous diffusion processes in one dimension is given. Instead of a classical second-order differential operator as infinitesimal generator, Feller's generalized differential operator D_m D_p^+ with a possibly nondecreasing weight function m is used. In this manner an optimal control of a wider class of one-dimensional Markov processes, including diffusions as well as birth and death processes, is realized.
20.
This paper studies the limiting distribution of the sequence {M^(k)(x)}_{k≥1} of particle-density matrices at time k and position x for multitype branching random walks in a random environment. Having shown that M^(k)(x), k ≥ 1, x ∈ Z, is a product of k independent and identically distributed matrix-valued random elements, we prove that the sequence {log M^(k)_{ij}(x)}_{k≥1}, suitably normalized, is asymptotically normal.
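The product-of-i.i.d.-matrices structure in this last abstract can be sketched numerically. The toy below multiplies k i.i.d. positive 2x2 random matrices (a stand-in of our own for the density matrices M^(k)(x)) and tracks the log of one entry, which grows linearly in k with fluctuations of order sqrt(k), as in the asymptotic-normality statement:

```python
import math
import random

def log_entry_of_product(k=500, seed=5):
    """Log of the (0,0) entry of a product of k i.i.d. positive 2x2
    matrices with Uniform(0.5, 1.5) entries; entries are renormalized
    at each step and the scale tracked in logs to avoid overflow."""
    rng = random.Random(seed)
    a, b, c, d = 1.0, 0.0, 0.0, 1.0        # start from the identity
    shift = 0.0                             # accumulated log-scale
    for _ in range(k):
        e = [[rng.uniform(0.5, 1.5) for _ in range(2)] for _ in range(2)]
        a, b, c, d = (a * e[0][0] + b * e[1][0], a * e[0][1] + b * e[1][1],
                      c * e[0][0] + d * e[1][0], c * e[0][1] + d * e[1][1])
        m = max(a, b, c, d)
        a, b, c, d = a / m, b / m, c / m, d / m
        shift += math.log(m)
    return shift + math.log(a)

val = log_entry_of_product()    # roughly k times the top Lyapunov exponent
```

Collecting `(log_entry_of_product(k, seed) - k * lam) / math.sqrt(k)` over many seeds, with `lam` the empirical growth rate, would exhibit the Gaussian limit that the paper proves for the normalized sequence {log M^(k)_{ij}(x)}.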