Similar Literature (20 results)
1.
We show uniqueness of the spine of a Fleming–Viot particle system under minimal assumptions on the driving process. If the driving process is a continuous-time Markov process on a finite space, we show that asymptotically, when the number of particles goes to infinity, the distribution of the spine converges to that of the driving process conditioned to stay alive forever, the branching rate for the spine is twice that of a generic particle in the system, and every side branch has the distribution of the unconditioned generic branching tree.
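The Fleming–Viot mechanism itself is easy to state: N particles evolve as independent copies of the driving process, and whenever a particle is absorbed it instantly jumps to the position of a uniformly chosen surviving particle. A minimal sketch of one step of such a system; the `step` and `is_dead` callbacks are hypothetical placeholders for a concrete driving chain and absorbing set:

```python
import random

def fleming_viot_step(positions, step, is_dead, rng):
    """One move of an N-particle Fleming-Viot system: every particle takes one
    step of the driving chain; any particle landing in the absorbing set is
    immediately resampled onto the location of a uniformly chosen survivor.
    Assumes at least one particle survives the step."""
    new = [step(x, rng) for x in positions]
    for i, x in enumerate(new):
        if is_dead(x):
            survivors = [new[j] for j in range(len(new))
                         if j != i and not is_dead(new[j])]
            new[i] = rng.choice(survivors)
    return new
```

For example, with a driving chain on the non-negative integers absorbed at 0, `is_dead` would be `lambda x: x == 0` and `step` one transition of the chain.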

2.
In this paper, we study the two-sided taboo limit processes that arise when a Markov chain or process is conditioned on staying in some set A for a long period of time. The taboo limit is time-homogeneous after time 0 and time-inhomogeneous before time 0. The time-reversed limit has this same qualitative structure. The precise transition structure of the taboo limit is identified in the context of discrete- and continuous-time Markov chains, as well as diffusions. In addition, we present a perfect simulation algorithm for generating exact samples from the quasi-stationary distribution of a finite-state Markov chain.
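For a finite-state chain, the quasi-stationary distribution mentioned here is the normalized left Perron eigenvector of the transition matrix restricted to the taboo set A. A minimal numerical sketch, where the 2×2 matrix `Q` is a made-up example and plain power iteration stands in for the paper's perfect-simulation algorithm:

```python
import numpy as np

# Sub-stochastic transition matrix restricted to the taboo set A
# (hypothetical example; row sums are < 1 because mass leaks out of A).
Q = np.array([[0.5, 0.3],
              [0.4, 0.4]])

def quasi_stationary(Q, iters=500):
    """Left-eigenvector power iteration: nu Q = rho * nu, renormalized to a
    probability vector at each step; the limit nu is the quasi-stationary
    distribution on the taboo set."""
    nu = np.ones(Q.shape[0]) / Q.shape[0]
    for _ in range(iters):
        nu = nu @ Q
        nu /= nu.sum()
    return nu

nu = quasi_stationary(Q)
```

Here the Perron eigenvalue of `Q` is 0.8, so `nu @ Q` equals `0.8 * nu` and `nu` is proportional to (4, 3).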

3.
We study stochastic systems arising in mean-field models; the systems under consideration belong to the class of switching diffusions, in which continuous dynamics and discrete events coexist and interact. The discrete events are modeled by a continuous-time Markov chain. In contrast to the usual switching diffusions, the systems include mean-field interactions. Our effort is devoted to obtaining laws of large numbers for the underlying systems. One of the distinct features of the paper is that the limit of the empirical measures is not deterministic but a random measure depending on the history of the Markovian switching process. A main difficulty is that the standard martingale approach cannot be used to characterize the limit because of the coupling due to the random switching process. In this paper, in contrast to the classical approach, the limit is characterized as the conditional distribution (given the history of the switching process) of the solution to a stochastic McKean–Vlasov differential equation with Markovian switching.

4.
In this paper we present a martingale related to the exit measures of super Brownian motion. By changing measure with this martingale in the canonical way we have a new process associated with the conditioned exit measure. This measure is shown to be identical to a measure generated by a non-homogeneous branching particle system with immigration of mass. An application is given to the problem of conditioning the exit measure to hit a number of specified points on the boundary of a domain. The results are similar in flavor to the “immortal particle” picture of conditioned super Brownian motion but more general, as the change of measure is given by a martingale which need not arise from a single harmonic function. Received: 27 August 1998 / Revised version: 8 January 1999

5.
Local mean-field Markov processes are constructed from local mean-field dynamical semigroups of Markov transition operators. This provides a general scheme for the convergence of empirical measure processes for tagged particles in the thermodynamic limit of classical interacting particle systems. As an application the Poissonian approximation for message-switching queueing networks is investigated.

6.
In this paper shift ergodicity and related topics are studied for certain stationary processes. We first present a simple proof of the conclusion that every stationary Markov process is a generalized convex combination of stationary ergodic Markov processes. A direct consequence is that a stationary distribution of a Markov process is extremal if and only if the corresponding stationary Markov process is time ergodic and every stationary distribution is a generalized convex combination of such extremal ones. We then consider space ergodicity for spin flip particle systems. We prove space shift ergodicity and mixing for certain extremal invariant measures for a class of spin systems, in which most of the typical models, such as the Voter Models and the Contact Models, are included. As a consequence of these results we see that for such systems, under each of those extremal invariant measures, the space and time means of an observable coincide, an important phenomenon in statistical physics. Our results provide partial answers to certain interesting problems in spin systems.

7.
We study the discrete-time evolution of a recombination transformation in population genetics. The transformation acts on a product probability space, and its evolution can be described by a Markov chain on a set of partitions that converges to the finest partition. We describe the geometric decay rate to this limit and the quasi-stationary behavior of the Markov chain when conditioned on the event that the chain does not hit the limit.

8.
In the set of finite binary sequences a Markov process is defined with discrete time in which each symbol of the binary sequence at time t+1 depends on the two neighboring symbols at time t. A proof is given of the existence and uniqueness of an invariant distribution, and its derivation is also given in a number of cases. Translated from Matematicheskie Zametki, Vol. 6, No. 5, pp. 555–566, November, 1969.

9.
The limiting conditional age distribution of a continuous-time Markov process whose state space is the set of non-negative integers, and for which {0} is absorbing, is defined as the weak limit as t→∞ of the last time before t at which an associated “return” Markov process exited from {0}, conditional on the state, j, of this process at time t. It is shown that this limit exists and is non-defective if the return process is ρ-recurrent and satisfies the strong ratio limit property. As a preliminary to the proof of the main results, some general results are established on the representation of the ρ-invariant measure and function of a Markov process. The conditions of the main results are shown to be satisfied by the return process constructed from a Markov branching process, and by birth and death processes. Finally, a number of limit theorems for the limiting age as j→∞ are given.

10.
We propose a particle system of diffusion processes coupled through a chain-like network structure described by an infinite-dimensional, nonlinear stochastic differential equation of McKean–Vlasov type. It has both (i) a local chain interaction and (ii) a mean-field interaction. It can be approximated by a limit of finite particle systems, as the number of particles goes to infinity. Due to the local chain interaction, propagation of chaos does not necessarily hold. Furthermore, we exhibit a dichotomy of presence or absence of mean-field interaction, and we discuss the problem of detecting its presence from the observation of a single component process.

11.
In number lotteries people choose r numbers out of s. Weekly published “drawings since hit” tables indicate how many drawings have taken place since each of the s numbers was last selected as a winning number. Among many lotto players, such tables reinforce the widespread belief that numbers should be “due” if they have not come up for a long time. Under the assumptions of independence of the drawings and equiprobability of all possible combinations, the random s-vectors Yn, n ≥ 1, of entries in a drawings since hit table after n drawings form a Markov chain. The limit distribution of Yn as n → ∞ is a new multivariate generalization of the geometric distribution. The determination of the distribution of the maximum entry in a drawings since hit table within the first n draws of a lottery seems to be an open problem.
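The Markov chain in question is straightforward to simulate: after each drawing, the table entry of every drawn number resets to 0 while every other entry increases by 1. A small sketch, where the state space {0,…,s−1} and the helper names are illustrative:

```python
import random

def update_table(Y, drawing):
    """One step of the 'drawings since hit' chain: entries of the drawn
    numbers reset to 0; all other entries increase by 1."""
    return [0 if i in drawing else y + 1 for i, y in enumerate(Y)]

def simulate(s, r, n, seed=0):
    """Table state after n independent drawings of r numbers out of
    {0, ..., s-1}, each combination equally likely."""
    rng = random.Random(seed)
    Y = [0] * s
    for _ in range(n):
        Y = update_table(Y, set(rng.sample(range(s), r)))
    return Y
```

Running `simulate(49, 6, n)` for large n and tabulating a single coordinate gives an empirical check that each marginal approaches a geometric distribution with success probability r/s.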

12.
We study a BMAP/SM/1 queue with batch Markovian arrival process input and semi-Markov service. Service times may depend on arrival phase states; that is, there are many types of arrivals which have different service time distributions. The service process is a heterogeneous Markov renewal process, and so our model includes known models as special cases. First, we consider the first passage time from level {κ+1} (the set of states in which the number of customers in the system is κ+1) to level {κ} when a batch arrival occurs at time 0 and service of a customer in that batch starts simultaneously. The service discipline is taken to be LIFO (Last-In First-Out) with preemption; this discipline plays a fundamental role in the analysis of the first passage time. Using this first passage time distribution, the busy period length distribution can be obtained. Next, we analyze the stationary workload distribution (the stationary virtual waiting time distribution). The workload, like the busy period, remains unaltered under any work-conserving service discipline. Based on this fact, we derive the Laplace–Stieltjes transform for the stationary distribution of the actual waiting time under a FIFO discipline. In addition, we give the Laplace–Stieltjes transforms for the distributions of the actual waiting times of the individual types of customers. Using the relationship between the stationary waiting time distribution and the stationary distribution of the number of customers in the system at departure epochs, we derive the generating function for the stationary joint distribution of the numbers of different types of customers at departures. This revised version was published online in June 2006 with corrections to the Cover Date.

13.
We consider a long Lorentz tube with absorbing boundaries. Particles are injected into the tube from the left end. We compute the equilibrium density profiles in two cases: the semi-infinite tube (in which case the density is constant) and a long finite tube (in which case the density is linear). In the latter case, we also show that convergence to equilibrium is well described by the heat equation. In order to prove these results, we obtain new results for the Lorentz particle that are of independent interest. First, we show that a particle conditioned not to hit the boundary for a long time converges to the Brownian meander. Second, we prove several local limit theorems for particles having a prescribed behavior in the past. © 2016 Wiley Periodicals, Inc.

14.
Markov models are commonly used to model many practical systems such as telecommunication systems, manufacturing systems and inventory systems. However, higher-order Markov models are not commonly used in practice because their number of states and parameters is huge, which leads to computational difficulties. In this paper, we propose a higher-order Markov model whose number of states and parameters is linear in the order of the model. We also develop efficient estimation methods for the model parameters. We then apply the model and method to solve the generalised Newsboy's problem. Numerical examples with applications to production planning are given to illustrate the power of our proposed model.
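One standard way to get a higher-order model with linearly many parameters is the mixture-transition-distribution idea: keep a single one-step matrix Q and mix the contributions of the last k states with k nonnegative weights, so parameters grow linearly in k rather than as s^(k+1). The sketch below is in that spirit and is not necessarily the paper's exact model; the 2-state example is made up:

```python
import numpy as np

def higher_order_step(Q, lam, history):
    """Next-state distribution of a k-th order chain built as a convex
    combination of one-step transitions from the last k states:
        P(X_{t+1} = j) = sum_i lam[i] * Q[history[-1-i], j].
    lam must be nonnegative and sum to 1; history holds past states,
    most recent last."""
    dist = np.zeros(Q.shape[0])
    for i, weight in enumerate(lam):
        dist += weight * Q[history[-1 - i]]
    return dist

# Hypothetical 2-state example: weight 0.7 on the most recent state,
# weight 0.3 on the state before it.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
dist = higher_order_step(Q, [0.7, 0.3], [0, 1])
```

A full second-order chain on 2 states would need 2^3 = 8 transition entries; this construction needs only the 4 entries of Q plus 2 lag weights.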

15.
We consider the so-called gambler's ruin problem for a discrete-time Markov chain that converges to a Cox–Ingersoll–Ross (CIR) process. Both the probability that the chain will hit a given boundary before the other and the average number of transitions needed to end the game are computed explicitly. Furthermore, we show that the quantities that we obtained tend to the corresponding ones for the CIR process. A real-life application to a problem in hydrology is presented.
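For any discrete birth-death chain, the two quantities mentioned (boundary hit probabilities and mean game durations) come from first-step linear systems. The sketch below solves the hit-probability system for an arbitrary interior up-probability; the CIR-approximating chain of the paper would supply its own state-dependent probabilities:

```python
import numpy as np

def ruin_probabilities(p_up, a, b):
    """Probability of hitting b before a, from each interior state of a
    birth-death chain on {a, ..., b} with up-probability p_up[i] (and
    down-probability 1 - p_up[i]) at interior state i.  Solves the
    first-step equations u(i) = p_up[i] u(i+1) + (1 - p_up[i]) u(i-1)
    with boundary values u(a) = 0, u(b) = 1."""
    states = list(range(a + 1, b))
    n = len(states)
    A = np.eye(n)
    rhs = np.zeros(n)
    for k, i in enumerate(states):
        pu = p_up[i]
        if k + 1 < n:
            A[k, k + 1] -= pu
        else:
            rhs[k] += pu            # right neighbor is b, where u = 1
        if k - 1 >= 0:
            A[k, k - 1] -= 1 - pu   # left neighbor a contributes u(a) = 0
    return dict(zip(states, np.linalg.solve(A, rhs)))
```

For the symmetric walk (p_up ≡ 1/2) this reproduces the classical answer u(i) = (i − a)/(b − a).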

16.
A space-time random set is defined and methods for estimating its parameters are investigated. The evolution in discrete time is described by a state-space model. The observed output is a planar union of interacting discs given by a probability density with respect to a reference Poisson process of discs. The state vector is to be estimated together with auxiliary parameters of transitions caused by a random walk. Three estimation methods are considered, the first of which is maximum likelihood estimation (MLE) for individual outputs at fixed times. In the space-time model the state vector can be estimated by the particle filter (PF), where MLE serves to estimate the auxiliary parameters. The aim of the present paper is to compare MLE and PF with particle Markov chain Monte Carlo (PMCMC). From the group of PMCMC methods we specifically use the particle marginal Metropolis-Hastings (PMMH) algorithm, which updates the state vector and the auxiliary parameters simultaneously. A simulation study is presented in which all estimators are compared by means of the integrated mean square error. New data are then simulated repeatedly from the model with parameters estimated by PMMH, and the fit with the original model is quantified by means of the spherical contact distribution function.

17.
The purpose of this paper is to introduce and construct a state-dependent counting and persistent random walk. Persistence is embedded in a Markov chain for predicting insured claims based on their current and past period claims. We calculate, for such a process, the probability generating function of the number of claims over time, and as a result are able to calculate their moments. Further, given the claim severity probability distribution, we provide both the claims process generating function and the mean and variance of the claims that an insurance firm confronts over a given period of time in such circumstances. A number of results and applications are then outlined (such as a Compound Claim Persistence Process).

18.
We consider a discrete-time Markov chain on the non-negative integers with drift to infinity and study the limiting behavior of the state probabilities conditioned on not having left state 0 for the last time. Using a transformation, we obtain a dual Markov chain with an absorbing state such that absorption occurs with probability 1. We prove that the state probabilities of the original chain conditioned on not having left state 0 for the last time are equal to the state probabilities of its dual conditioned on non-absorption. This allows us to establish the simultaneous existence, and then equivalence, of their limiting conditional distributions. Although a limiting conditional distribution for the dual chain is always a quasi-stationary distribution in the usual sense, a similar statement is not possible for the original chain.

19.
Heatwaves are defined as a set of hot days and nights that cause a marked short-term increase in mortality. Obtaining accurate estimates of the probability of an event lasting many days is important. Previous studies of temporal dependence of extremes have assumed either a first-order Markov model or a particularly strong form of extremal dependence, known as asymptotic dependence. Neither of these assumptions is appropriate for the heatwaves that we observe for our data. A first-order Markov assumption does not capture whether the previous temperature values have been increasing or decreasing and asymptotic dependence does not allow for asymptotic independence, a broad class of extremal dependence exhibited by many processes including all non-trivial Gaussian processes. This paper provides a kth-order Markov model framework that can encompass both asymptotic dependence and asymptotic independence structures. It uses a conditional approach developed for multivariate extremes coupled with copula methods for time series. We provide novel methods for the selection of the order of the Markov process that are based upon only the structure of the extreme events. Under this new framework, the observed daily maximum temperatures at Orleans, in central France, are found to be well modelled by an asymptotically independent third-order extremal Markov model. We estimate extremal quantities, such as the probability of a heatwave event lasting as long as the devastating European 2003 heatwave event. Critically our method enables the first reliable assessment of the sensitivity of such estimates to the choice of the order of the Markov process.

20.
Motivated by applications in telecommunications, computer science and physics, we consider a discrete-time Markov process with restart. At each step the process either with a positive probability restarts from a given distribution, or with the complementary probability continues according to a Markov transition kernel. The main contribution of the present work is that we obtain an explicit expression for the expectation of the hitting time (to a given target set) of the process with restart. The formula is convenient when considering the problem of optimization of the expected hitting time with respect to the restart probability. We illustrate our results with two examples in uncountable and countable state spaces and with an application to network centrality.
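On a finite state space, the restart mechanism folds into a single kernel P_r = (1−p)P + p·1ν^T (restart distribution ν), after which the expected hitting time solves the usual first-step linear system. A finite-state sketch, not the paper's explicit formula; the example kernel and restart distribution in the test are made up:

```python
import numpy as np

def expected_hitting_time_with_restart(P, nu, p, target, start):
    """Expected hitting time of `target` for a chain that, at each step,
    restarts from distribution nu with probability p and otherwise moves
    by kernel P.  Solves the first-step system h = 1 + P_r h on the
    non-target states, with h = 0 on the target."""
    n = P.shape[0]
    # Kernel of the process with restart.
    Pr = (1 - p) * P + p * np.outer(np.ones(n), nu)
    mask = np.array([i not in target for i in range(n)])
    A = np.eye(mask.sum()) - Pr[np.ix_(mask, mask)]
    h = np.zeros(n)
    h[mask] = np.linalg.solve(A, np.ones(mask.sum()))
    return h[start]
```

Sweeping this function over p gives a direct numerical handle on the optimization of the expected hitting time with respect to the restart probability.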

