Similar Literature (20 results)
1.
We analyze the role of recurrent mutation on the time to formation or detection of particular genotypes in finite populations. The traditional method of approximating Markov chain models by diffusion processes is used. However, the diffusion approximations that arise in this context are governed not only by infinitesimal drift and diffusion coefficients, but also by a state-dependent killing rate that arises from the formation events. The resulting processes are particularly tractable, and allow a comprehensive analysis of the role of mutation in this problem.

2.
3.
4.
The Moran process models the spread of mutations in populations on graphs. We investigate the absorption time of the process, which is the time taken for a mutation introduced at a randomly chosen vertex to either spread to the whole population, or to become extinct. It is known that the expected absorption time for an advantageous mutation is O(n^4) on an n-vertex undirected graph, which allows the behaviour of the process on undirected graphs to be analysed using the Markov chain Monte Carlo method. We show that this does not extend to directed graphs by exhibiting an infinite family of directed graphs for which the expected absorption time is exponential in the number of vertices. However, for regular directed graphs, we show that the expected absorption time is Ω(n log n) and O(n^2). We exhibit families of graphs matching these bounds and give improved bounds for other families of graphs, based on isoperimetric number. Our results are obtained via stochastic dominations which we demonstrate by establishing a coupling in a related continuous-time model. The coupling also implies several natural domination results regarding the fixation probability of the original (discrete-time) process, resolving a conjecture of Shakarian, Roos and Johnson. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 137–159, 2016
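As an aside, the discrete-time Moran process described in this abstract is easy to simulate directly. The sketch below is illustrative only (the graph encoding, fitness value r, and function name are assumptions of mine, not the paper's code): it runs one trajectory to absorption and reports whether the mutation fixed and how many steps it took.

```python
import random

def moran_absorption_time(adj, r=2.0, seed=0):
    """Run one trajectory of the Moran process on a digraph given as an
    out-neighbour adjacency list, starting from a single mutant with
    fitness r at a uniformly random vertex.  Returns (fixed, steps):
    fixed is True if the mutation spread to the whole population."""
    rng = random.Random(seed)
    n = len(adj)
    mutants = {rng.randrange(n)}
    steps = 0
    while 0 < len(mutants) < n:
        # Reproducer chosen with probability proportional to fitness.
        weights = [r if v in mutants else 1.0 for v in range(n)]
        v = rng.choices(range(n), weights=weights)[0]
        # Its offspring replaces a uniformly random out-neighbour.
        u = rng.choice(adj[v])
        if v in mutants:
            mutants.add(u)
        else:
            mutants.discard(u)
        steps += 1
    return len(mutants) == n, steps

# Complete graph on 6 vertices (undirected, hence regular as a digraph).
K6 = [[u for u in range(6) if u != v] for v in range(6)]
fixed, t = moran_absorption_time(K6, r=2.0, seed=1)
```

Averaging `steps` over many seeds gives a crude empirical estimate of the expected absorption time for a given graph family.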

5.
Starting from a real-valued Markov chain X0, X1, …, Xn with stationary transition probabilities, a random element {Y(t); t ∈ [0, 1]} of the function space D[0, 1] is constructed by letting Y(k/n) = Xk, k = 0, 1, …, n, and assuming Y(t) constant in between. Sample tightness criteria for sequences {Yn(t); t ∈ [0, 1]} of such random elements in D[0, 1] are then given in terms of the one-step transition probabilities of the underlying Markov chains. Applications are made to Galton-Watson branching processes.

6.
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P.

7.
Stochastic Analysis and Applications, 2013, 31(2): 419–441
We consider the stochastic model of water pollution, which mathematically can be written with a stochastic partial differential equation driven by Poisson measure noise. We use a stochastic particle Markov chain method to produce an implementable approximate solution. Our main result is the annealed law of large numbers establishing convergence in probability of our Markov chains to the solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains.

8.
9.
Markov chains with periodic graphs arise frequently in a wide range of modelling experiments. Application areas range from flexible manufacturing systems in which pallets are treated in a cyclic manner to computer communication networks. The primary goal of this paper is to show how advantage may be taken of this property to reduce the amount of computer memory and computation time needed to compute stationary probability vectors of periodic Markov chains. After reviewing some basic properties of Markov chains whose associated graph is periodic, we introduce a ‘reduced scheme’ in which only a subset of the probability vector need be computed. We consider the effect of the application of both direct and iterative methods to the original Markov chain (permuted to normal periodic form) as well as to the reduced system. We show how the periodicity of the Markov chain may be efficiently computed as the chain itself is being generated. Finally, some numerical experiments to illustrate the theory are presented.
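The claim that periodicity can be computed while the chain is being generated can be illustrated with a standard single-BFS trick (a generic sketch of the idea, not necessarily the paper's algorithm): for a strongly connected transition graph, the period equals the gcd of level[u] + 1 − level[v] over all edges (u, v), where level is the BFS distance from an arbitrary start state.

```python
from collections import deque
from math import gcd

def graph_period(succ):
    """Period of a strongly connected digraph (the transition graph of an
    irreducible Markov chain), via one BFS from state 0: the period is the
    gcd over all edges (u, v) of level[u] + 1 - level[v]."""
    level = {0: 0}
    queue = deque([0])
    g = 0
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in level:
                level[v] = level[u] + 1
                queue.append(v)
            else:
                g = gcd(g, abs(level[u] + 1 - level[v]))
    return g

p3 = graph_period({0: [1], 1: [2], 2: [0]})     # directed 3-cycle
p1 = graph_period({0: [1], 1: [2], 2: [0, 1]})  # chord 2 -> 1 breaks periodicity
```

Because each edge is examined exactly once, the same gcd update can be applied online as transitions are generated.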

10.
Ruin Probabilities under a Markovian Risk Model
In this paper, a Markovian risk model is developed, in which the occurrence of the claims is described by a point process {N(t)}_{t≥0}, with N(t) being the number of jumps of a Markov chain during the interval [0, t]. For the model, the explicit form of the ruin probability ψ(0) and a bound for the convergence rate of the ruin probability ψ(u) are given by using the generalized renewal technique developed in this paper. Finally, we prove that the ruin probability ψ(u) is a linear combination of negative exponential functions in the special case where the claims are exponentially distributed and the Markov chain has an intensity matrix (q_ij)_{i,j∈E} such that q_m = q_{m1} and q_i = q_{i(i+1)} for 1 ≤ i ≤ m−1.
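For intuition, the one-state special case of such a model is the classical Cramér-Lundberg model, where exponential claims admit a closed-form ruin probability; the Monte Carlo sketch below (the parameter values and the finite horizon are illustrative assumptions, not the paper's) can be checked against that formula.

```python
import math
import random

def ruin_prob_mc(u, lam=1.0, beta=2.0, c=1.0, horizon=200.0, n_paths=4000, seed=0):
    """Crude Monte Carlo estimate of the ruin probability psi(u) in the
    classical (one-state) risk model: claims arrive as a Poisson(lam)
    process, claim sizes are Exp(beta), premiums come in at rate c.
    The finite horizon truncates each path, so psi is slightly
    underestimated."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, x = 0.0, u
        while t < horizon:
            w = rng.expovariate(lam)   # waiting time to the next claim
            t += w
            x += c * w - rng.expovariate(beta)
            if x < 0:                  # surplus only grows between claims
                ruined += 1
                break
    return ruined / n_paths

# With exponential claims, psi(u) = (lam / (c * beta)) * exp(-(beta - lam / c) * u).
exact = 0.5 * math.exp(-2.0)           # u = 2, lam = c = 1, beta = 2
approx = ruin_prob_mc(2.0)
```

Note the exponential decay in u, matching the abstract's claim that ψ(u) is a combination of negative exponentials in the exponential-claims case.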

11.
12.
Equivalent Definitions of Tree-Indexed Markov Chains
Research on tree-indexed stochastic processes has already produced a number of results. Benjamini and Peres first gave the definition of a tree-indexed Markov chain. Berger and Ye Zhongxing studied the existence of the entropy rate of stationary random fields on homogeneous trees. Yang Weiguo and Liu Wen studied the strong law of large numbers and the asymptotic equipartition property for Markov fields on trees, and Yang Weiguo further studied the strong law of large numbers for general tree-indexed Markov chains. To support more effective study of a range of related problems for tree-indexed processes, this paper builds on these earlier results to give an equivalent definition of tree-indexed Markov chains, and proves the equivalence by mathematical induction.

13.
This paper discusses an efficient method to compute mean passage times and absorption probabilities in Markov and semi-Markov models. It uses the state-reduction approach introduced by Winfried Grassmann for the computation of the stationary distribution of a Markov model. The method is numerically stable and has a simple probabilistic interpretation. It is especially stressed that the natural frame for the state-reduction method is semi-Markov theory rather than Markov theory.
Zusammenfassung (translated from German): An effective computational procedure for determining mean times to absorption and absorption probabilities in Markov and semi-Markov models is presented. The method rests on the state-reduction approach introduced by Grassmann for the computation of stationary distributions of Markov chains. The procedure is numerically stable and has a simple probabilistic interpretation. It is emphasized that the natural framework of the method is semi-Markov theory rather than Markov theory.
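Grassmann's state-reduction approach, in its stationary-distribution form, is commonly known as the Grassmann-Taksar-Heyman (GTH) algorithm. A minimal plain-Python sketch (not the paper's code): states are censored out one at a time, and the update involves no subtractions, which is the source of the numerical stability mentioned above.

```python
def gth_stationary(P):
    """Stationary distribution of an irreducible finite Markov chain by
    GTH state reduction.  States n-1, ..., 1 are censored out one at a
    time; the updates use only additions, multiplications and divisions
    of nonnegative quantities (no cancellation)."""
    n = len(P)
    a = [row[:] for row in P]
    for k in range(n - 1, 0, -1):
        s = sum(a[k][j] for j in range(k))   # prob. of leaving k to a kept state
        for i in range(k):
            a[i][k] /= s                     # expected visits to k between kept states
            for j in range(k):
                a[i][j] += a[i][k] * a[k][j]
    # Back-substitution, then normalisation.
    x = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):
        x[k] = sum(x[i] * a[i][k] for i in range(k))
    z = sum(x)
    return [v / z for v in x]

pi = gth_stationary([[0.5, 0.5], [0.25, 0.75]])  # stationary law (1/3, 2/3)
```

The same reduction sweep, kept with its intermediate quantities, also yields mean passage times and absorption probabilities, which is the use made of it in the paper.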

14.
Let (Y_t)_{t≥0} be an ergodic diffusion with invariant distribution ν. Consider the empirical measure ν_n := (Σ_{k=1}^n γ_k)^{-1} Σ_{k=1}^n γ_k δ_{X_{k−1}}, where (X_k)_{k≥0} is an Euler scheme with decreasing steps (γ_k)_{k≥1} which approximates (Y_t)_{t≥0}. Given a test function f, we obtain sharp concentration inequalities for ν_n(f) − ν(f) which improve the results in Honoré et al. (2019). Our hypotheses on the test function f cover many real applications: either f is supposed to be a coboundary of the infinitesimal generator of the diffusion, or f is supposed to be Lipschitz.
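A minimal numerical sketch of the object in this abstract, under illustrative assumptions of mine (steps γ_k = k^(−1/2), an Ornstein-Uhlenbeck diffusion as test case, and f(x) = x²; none of these choices come from the paper):

```python
import math
import random

def euler_invariant_estimate(b, sigma, f, n, x0=0.0, seed=0):
    """Weighted empirical measure nu_n(f) of a decreasing-step Euler scheme
    X_k = X_{k-1} + b(X_{k-1}) gamma_k + sigma(X_{k-1}) sqrt(gamma_k) N_k
    with steps gamma_k = k**-0.5.  As in the abstract's definition, nu_n
    weights f at the pre-step state X_{k-1} by gamma_k."""
    rng = random.Random(seed)
    x = x0
    num = den = 0.0
    for k in range(1, n + 1):
        g = k ** -0.5
        num += g * f(x)
        den += g
        x += b(x) * g + sigma(x) * math.sqrt(g) * rng.gauss(0.0, 1.0)
    return num / den

# Ornstein-Uhlenbeck test case: dY = -Y dt + sqrt(2) dW has invariant law
# N(0, 1), so nu_n(f) should approach 1 for f(x) = x**2.
est = euler_invariant_estimate(lambda x: -x, lambda x: math.sqrt(2.0),
                               lambda x: x * x, n=20000)
```

The concentration inequalities of the paper quantify how tightly `est` clusters around ν(f) = 1 as n grows.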

15.
The practical usefulness of Markov models and Markov decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is determined by its persistent states, obtained after decomposition into classes of communicating states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state that is independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process has been captured by the class.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then justifies the assumption on which the reduction rests: we reduce the ergodic class to its stronger part, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it is again a Markov chain.
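One standard way to make "reduce the ergodic class to a subset of its states" concrete is the stochastic complement (censoring) construction. The sketch below is a generic illustration of that construction, not the paper's approximate Two-Time-Scale method; the example matrix is invented.

```python
import numpy as np

def censor_chain(P, keep):
    """Stochastic complement of a Markov chain on the kept states:
    P_AA + P_AB (I - P_BB)^{-1} P_BA.  The result is again a stochastic
    matrix, and its stationary law equals the original one restricted to
    `keep` and renormalised."""
    P = np.asarray(P, dtype=float)
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(P.shape[0]), keep)
    A = P[np.ix_(keep, keep)]
    B = P[np.ix_(keep, drop)]
    C = P[np.ix_(drop, keep)]
    D = P[np.ix_(drop, drop)]
    return A + B @ np.linalg.solve(np.eye(len(drop)) - D, C)

P = [[0.9, 0.05, 0.05],
     [0.1, 0.8, 0.1],
     [0.5, 0.5, 0.0]]
Q = censor_chain(P, [0, 1])   # reduce to the two high-probability states
```

Dropping the weak states entirely and renormalising the kept rows, as an approximation to this exact complement, is in the spirit of the approximate reduction described above.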

16.
All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.

17.
For a real number p with 1 < p < ∞, we consider the first eigenvalues of the p-Laplacian on graphs, and estimates for the solutions of p-Laplace equations on graphs. We provide a discrete version of Picone's identity and an application of it. More precisely, we prove a Barta-type inequality for graphs with boundary. Finally, we provide a discrete version of the anti-maximum principle.

18.
Optimization, 2012, 61(12): 1427–1447
This article is concerned with the limiting average variance for discrete-time Markov control processes in Borel spaces, subject to pathwise constraints. Under suitable hypotheses we show that within the class of deterministic stationary optimal policies for the pathwise constrained problem, there exists one with a minimal variance.

19.
An extension of the work of P. Mandl concerning the optimal control of time-homogeneous diffusion processes in one dimension is given. Instead of a classical second-order differential operator as infinitesimal generator, Feller's generalized differential operator D_m D_p^+ with a possibly nondecreasing weight function m is used. In this manner an optimal control of a wider class of one-dimensional Markov processes, including diffusions as well as birth-and-death processes, is realized.

20.
This paper studies the limiting distribution of the sequence {M^(k)(x)}_{k≥1} of particle density matrices at time k and position x for multitype branching random walks in random environments. Having shown that M^(k)(x), k ≥ 1, x ∈ Z, is a product of k independent and identically distributed matrix-valued random elements, we prove as the main result that the random sequence {log M^(k)_{ij}(x)}_{k≥1}, after suitable normalization, is asymptotically normal.

