Similar Documents
1.
We consider a repairable system with a finite state space, evolving in time according to a semi-Markov process. The system is stopped at random times, for a random duration, so that it can be preventively maintained. Our aim is to find the preventive maintenance policy that optimizes the stationary availability, whenever it exists. The computation of the stationary availability is based on the fact that the maintained system evolves according to a semi-regenerative process. As for the optimization, we observe on numerical examples that it is possible to limit the study to maintenance actions that begin at deterministic times. We demonstrate this result in a particular case and study the deterministic maintenance policies in that case. In particular, we show that, if the initial system has an increasing failure rate, the maintenance actions improve the stationary availability if and only if they are not too long on average compared to the repairs (a bound for the mean duration of the maintenance actions is provided). On the contrary, if the initial system has a decreasing failure rate, the maintenance policy lowers the stationary availability. A few other cases are studied. Copyright © 2000 John Wiley & Sons, Ltd.
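
Since the maintained system is semi-regenerative, its stationary availability can be computed by a Markov renewal-reward argument. As a hedged reminder of the identity involved (the notation $\nu$, $U$ and $C$ is ours, not the paper's):

$$A_\infty \;=\; \lim_{t\to\infty}\mathbb{P}(\text{system up at time } t)\;=\;\frac{\sum_i \nu(i)\,\mathbb{E}_i[U]}{\sum_i \nu(i)\,\mathbb{E}_i[C]},$$

where $\nu$ is the stationary distribution of the Markov chain embedded at the semi-regeneration times, $C$ is the length of one semi-regenerative cycle and $U$ the up time accumulated during that cycle. Optimizing the maintenance policy amounts to trading off how much maintenance lengthens the cycles against how much up time it adds within them.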

2.
Consider an N-dimensional Markov chain obtained from N one-dimensional random walks by Doob h-transform with the q-Vandermonde determinant. We prove that as N becomes large, these Markov chains converge to an infinite-dimensional Feller Markov process. The dynamical correlation functions of the limit process are determinantal with an explicit correlation kernel. The key idea is to identify random point processes on ${\mathbb Z}$ with q-Gibbs measures on Gelfand–Tsetlin schemes and construct Markov processes on the latter space. Independently, we analyze the large time behavior of PushASEP with finitely many particles and particle-dependent jump rates (it arises as a marginal of our dynamics on Gelfand–Tsetlin schemes). The asymptotics is given by a product of a marginal of the GUE-minor process and geometric distributions.
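
For readers less familiar with the construction, the Doob h-transform of a transition kernel $P$ by a positive $P$-harmonic function $h$ (here the q-Vandermonde determinant evaluated at the N-particle configuration) is the Markov kernel

$$P^{h}(x,y) \;=\; \frac{h(y)}{h(x)}\,P(x,y).$$

This is the generic definition rather than the paper's exact setup; harmonicity of $h$ (that is, $Ph = h$ on the allowed configurations) is what makes $P^h$ a proper transition kernel.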

3.
A classical damping Hamiltonian system perturbed by a random force is considered. The locally uniform large deviation principle of Donsker and Varadhan is established for its empirical occupation measures at large times, under the condition, roughly speaking, that the force driven by the potential grows to infinity at infinity. Under the weaker condition that this force remains greater than some positive constant at infinity, we show that the system converges to its equilibrium measure at an exponential rate, and moreover obeys the moderate deviation principle. These results are obtained by constructing appropriate Lyapunov test functions, and are based on some results about large and moderate deviations and exponential convergence for general strong-Feller Markov processes. Moreover, these conditions on the potential are shown to be sharp.
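
A damping Hamiltonian system perturbed by a random force is typically written, in our generic notation, as the degenerate diffusion

$$dx_t = y_t\,dt,\qquad dy_t = -\big(c(x_t,y_t)\,y_t + \nabla V(x_t)\big)\,dt + \sigma(x_t)\,dW_t,$$

with position $x_t$, velocity $y_t$, damping coefficient $c$, potential $V$ and Brownian forcing $W$; the "force driven by the potential" in the condition above is $|\nabla V|$. The exact coefficients and assumptions are those of the paper, not ours.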

4.
We use random spanning forests to find, for any Markov process on a finite set of size n and any positive integer \(m \le n\), a probability law on the subsets of size m such that the mean hitting time of a random target that is drawn from this law does not depend on the starting point of the process. We use the same random forests to give probabilistic insights into the proof of an algebraic result due to Micchelli and Willoughby and used by Fill and by Miclo to study absorption times and convergence to equilibrium of reversible Markov chains. We also introduce a related coalescence and fragmentation process that leads to a number of open questions.
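
In the standard formulation of such random spanning forests (our notation; the paper's parametrization may differ), a rooted forest $\phi$ spanning the finite state space is drawn with probability

$$\mathbb{P}_q(\Phi = \phi) \;\propto\; q^{\#\mathrm{roots}(\phi)} \prod_{e \in \phi} w(e),$$

where the $w(e)$ are the transition rates of the Markov process and $q > 0$ tunes the number of roots; the set of roots, conditioned to have size m, is a natural candidate for the target law described above.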

5.
We consider a random perturbation of a 2-dimensional Hamiltonian ODE. Under an appropriate change of time, we identify a reduced model, which in some aspects is similar to a stochastically averaged model. The novelty of our problem is that the set of critical points of the Hamiltonian has an interior. Thus we can stochastically average outside this set of critical points, but inside it we can make no model reduction. The result is a Markov process on a stratified space which looks like a whiskered sphere (i.e., a 2-dimensional sphere with a line attached). At the junction of the sphere and the line, gluing conditions identify the behavior of the Markov process.
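
A generic form of the randomly perturbed planar Hamiltonian system being described (our notation; the paper's scaling and coefficients may differ) is

$$dX^{\varepsilon}_t \;=\; \frac{1}{\varepsilon}\,\nabla^{\perp}H(X^{\varepsilon}_t)\,dt \;+\; b(X^{\varepsilon}_t)\,dt \;+\; \sigma(X^{\varepsilon}_t)\,dW_t, \qquad \nabla^{\perp}H = (\partial_{x_2}H,\,-\partial_{x_1}H),$$

so that for small $\varepsilon$ the fast rotation along level sets of $H$ justifies averaging over those level sets, except on the set where $\nabla H = 0$, which here has nonempty interior.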


6.
We present a Markov chain Monte Carlo (MCMC) method for generating Markov chains using Markov bases for conditional independence models for a four-way contingency table. We then describe a Markov basis characterized by Markov properties associated with a given conditional independence model and show how to use the Markov basis to generate random tables of a Markov chain. The estimates of exact p-values can be obtained from random tables generated by the MCMC method. Numerical experiments examine the performance of the proposed MCMC method in comparison with the $\chi^2$ approximation using large sparse contingency tables.
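
A minimal sketch of the kind of Markov-basis MCMC being described, written for the simpler two-way independence model rather than the paper's four-way conditional-independence models; the function names and the move set are illustrative assumptions, not the paper's construction.

import numpy as np
from math import lgamma

def independence_moves(rows, cols):
    # Basic +1/-1 moves on a 2x2 sub-rectangle: the classical Markov basis
    # for the two-way independence model (illustrative only).
    moves = []
    for i1 in range(rows):
        for i2 in range(i1 + 1, rows):
            for j1 in range(cols):
                for j2 in range(j1 + 1, cols):
                    m = np.zeros((rows, cols), dtype=int)
                    m[i1, j1] = m[i2, j2] = 1
                    m[i1, j2] = m[i2, j1] = -1
                    moves.append(m)
    return moves

def mcmc_step(table, moves, rng):
    # One Metropolis step on the fiber of tables with fixed margins,
    # targeting the hypergeometric law P(x) proportional to prod 1/x_ij!.
    move = moves[rng.integers(len(moves))] * rng.choice([-1, 1])
    proposal = table + move
    if (proposal < 0).any():
        return table                      # proposal leaves the fiber: stay put
    log_ratio = sum(lgamma(table[idx] + 1) - lgamma(proposal[idx] + 1)
                    for idx in np.ndindex(table.shape))
    return proposal if np.log(rng.random()) < log_ratio else table

Running many such steps from the observed table and recording a test statistic (for example the $\chi^2$ statistic) at each step yields the Monte Carlo estimate of the exact p-value as the proportion of sampled tables whose statistic is at least the observed one.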

7.
The paper deals with a model of the genetic process of recombination, one of the basic mechanisms generating genetic variability. Mathematically, the model can be represented by the so-called random evolution of Griego and Hersch, in which a random switching process selects from among several possible modes of operation of a dynamical system. The model, introduced by Polanska and Kimmel, involves mutations in the form of a time-continuous Markov chain and genetic drift. We demonstrate asymptotic properties of the model under different demographic scenarios for the population in which the process evolves. Copyright © 2003 John Wiley & Sons, Ltd.

8.
We study the strong convergence of sequences of arbitrary random variables. Using a convergence theorem for series of martingale differences, we prove a strong limit theorem for arbitrary random sequences; as corollaries, we obtain strong laws of large numbers for Markov processes, martingale difference sequences, and sequences of independent random variables.
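
As a hedged illustration of the type of result obtained (a classical special case, not necessarily the paper's exact statement): if $(X_n)$ is a martingale difference sequence with $\sum_{n\ge 1}\mathbb{E}[X_n^2]/n^2 < \infty$, then the series $\sum_n X_n/n$ converges almost surely, and the Kronecker lemma yields the strong law

$$\frac{1}{n}\sum_{k=1}^{n} X_k \;\longrightarrow\; 0 \quad \text{a.s.}$$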

9.
We postulate observations from a Poisson process whose rate parameter modulates between two values determined by an unobserved Markov chain. The theory switches from continuous to discrete time by considering the intervals between observations as a sequence of dependent random variables. A result from hidden Markov models allows us to sample from the posterior distribution of the model parameters given the observed event times using a Gibbs sampler with only two steps per iteration.
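
A minimal simulation sketch of the data-generating model just described (a Poisson process whose rate switches between two values according to an unobserved two-state Markov chain); all names and parameter values are our own illustrative choices, and the two-step Gibbs sampler itself is not reproduced here.

import numpy as np

def simulate_mmpp(t_end, rates=(1.0, 5.0), switch=(0.2, 0.3), seed=0):
    # Event times of a Markov-modulated Poisson process: the rate is
    # rates[s] while the hidden two-state chain sits in state s, and the
    # chain jumps 0->1 at intensity switch[0] and 1->0 at intensity switch[1].
    rng = np.random.default_rng(seed)
    t, state, events = 0.0, 0, []
    while t < t_end:
        t_switch = min(t + rng.exponential(1.0 / switch[state]), t_end)
        s = t
        while True:                      # homogeneous Poisson events in this regime
            s += rng.exponential(1.0 / rates[state])
            if s >= t_switch:
                break
            events.append(s)
        t, state = t_switch, 1 - state
    return np.array(events)

The inter-event times of such a sample form exactly the dependent sequence that the abstract's discrete-time analysis works with.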

10.
We prove necessary and sufficient conditions for the transience of the non-zero states in a non-homogeneous, continuous time Markov branching process. The result is obtained by passing from results about the discrete time skeleton of the continuous time chain to the continuous time chain itself. An alternative proof of a result for continuous time Markov branching processes in random environments is then given, showing that earlier moment conditions were not necessary.

11.
The existing literature contains many examples of mean-field particle systems converging to the distribution of a Markov process conditioned to not hit a given set. In many situations, these mean-field particle systems are failable, meaning that they are not well defined after a given random time. Our first aim is to introduce an original mean-field particle system, which is always well defined and whose limit, as the number of particles grows, is in all generality the distribution of a process conditioned to not hit a given set. Under natural conditions on the underlying process, we also prove that the convergence holds uniformly in time as the number of particles goes to infinity. As an illustration, we show that our assumptions are satisfied in the case of a piecewise deterministic Markov process.
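
For context, a minimal sketch of one classical mean-field particle scheme of the kind alluded to in the first sentence (often called a Fleming-Viot-type particle system), not the authors' new, always-well-defined construction: each particle follows the underlying dynamics, and any particle that hits the forbidden set is instantly resampled from the position of another particle. The toy chain, sizes and names below are illustrative assumptions.

import numpy as np

def fleming_viot(n_particles=200, n_steps=50000, seed=0):
    # Particle approximation of a nearest-neighbour walk on {1,...,10},
    # reflected at 10 and conditioned to not hit 0: a particle stepping
    # to 0 is resampled from the position of a uniformly chosen other one.
    rng = np.random.default_rng(seed)
    pos = np.full(n_particles, 5)
    for _ in range(n_steps):
        i = rng.integers(n_particles)
        new = min(pos[i] + rng.choice([-1, 1]), 10)
        if new == 0:                              # hit the forbidden set
            j = rng.integers(n_particles - 1)
            new = pos[j + (j >= i)]               # uniform among the others
        pos[i] = new
    # empirical distribution on {1,...,10}, approximating the conditioned law
    return np.bincount(pos, minlength=11)[1:] / n_particles

Such schemes can break down, for instance if all particles would require resampling at the same time, which is the kind of degeneracy the construction in this abstract is designed to avoid.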

12.
The Virtual Build-to-Order (VBTO) approach strives to fulfil customers' orders for the specific product variants they seek more efficiently than a conventional order fulfilment system. It does so by opening the planning pipeline. Here the feasibility of modelling the VBTO system as a Markov process is investigated. Two system configurations are considered: a random pipeline feed policy that assumes only knowledge of the overall demand pattern, and an informed policy that ensures a mix of different variants in the system. First-order Markov models, which assume stationarity requirements are satisfied, are developed for small VBTO systems. The model for the informed feed policy shows excellent agreement with simulation results and confirms the superiority of this policy over the random policy. The model for the random policy is more accurate at high variety levels than at low variety levels. Accuracy is improved with a second-order Markov model. Although impractical for modelling large-scale VBTO systems in either configuration, the Markov approach is valuable in providing insights, theoretical foundations and validation for simulation models. It aids the interpretation of observations from simulations of large-scale systems and explains the mechanism by which an unrepresentative stock mix develops over time under the random policy.

13.
Markov chains provide us with a powerful probabilistic tool that allows one to study the structure of connected graphs in detail. The statistics of events for Markov chains defined on connected graphs can be effectively studied by the method of generalized inverses, which we review. The approach is also applicable to directed graphs and to interacting networks that share a set of nodes. We discuss a generalization of Lévy flight random walks for large complex networks and study the interplay between the nonlinearity of the diffusion process and the topological structure of the network.
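
One standard instance of the generalized-inverse technique (the Kemeny-Snell fundamental matrix; the paper reviews a broader family): for an ergodic chain with transition matrix $P$ and stationary row vector $\pi$,

$$Z \;=\; \big(I - P + \mathbf{1}\pi\big)^{-1}, \qquad \mathbb{E}_i[T_j] \;=\; \frac{Z_{jj} - Z_{ij}}{\pi_j},$$

so mean hitting times, and related event statistics on the graph, follow from a single matrix inversion.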

14.
Markov properties and strong Markov properties for random fields are defined and discussed. Special attention is given to those defined by I. V. Evstigneev. The strong Markov nature of Markov random fields with respect to random domains such as [0, L], where L is a multidimensional extension of a stopping time, is explored. A special case of this extension is shown to generalize a result of Merzbach and Nualart for point processes. As an additional example, Evstigneev's Markov and strong Markov properties are considered for independent increment jump processes.

15.
We reveal the intrinsic branching structure of random walks in random environments on a strip, namely a multitype branching process with immigration. Using this intrinsic branching structure, the first hitting time of the walk can be expressed exactly. Two applications of the intrinsic branching structure are given: (1) the mean of the first hitting time is computed, giving an explicit expression for the speed in the law of large numbers for the walk; (2) an explicit expression is obtained for the density of the invariant measure of the Markov chain of the environment viewed from the particle, which in turn yields an alternative, direct proof of the law of large numbers by the "environment viewed from the particle" method.

16.
It is shown that a mixing Markov chain is a unilateral or one-sided factor of every ergodic process of equal or greater entropy. This extends the work of Sinai, who showed that the result holds for independent processes, and the work of Ornstein and Weiss, who showed that the result holds for mixing Markov chains in which all transition probabilities are positive. The proof exploits the Rothstein-Burton joinings-space formulation of Ornstein's isomorphism theory, and uses a random coding argument. Partially supported by an NSF Graduate Fellowship, an NSF Postdoctoral Fellowship, and NSF Grant # DMS 84-03182 during the writing of this article.

17.
In this paper, we prove the large deviation principle (LDP) for the occupation measures of not necessarily irreducible random dynamical systems driven by Markov processes. The LDP for not necessarily irreducible dynamical systems driven by an i.i.d. sequence is derived. As a further application, we establish the LDP for extended hidden Markov models, filling a gap in the literature, and obtain large deviation estimates for the log-likelihood process and the maximum likelihood estimator of hidden Markov models.
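
For orientation, the objects involved are the empirical occupation measures and the associated LDP, stated here in generic notation (the paper's precise setting, with the random dynamical system and its driving Markov process, is more elaborate):

$$L_n \;=\; \frac{1}{n}\sum_{k=1}^{n} \delta_{X_k}, \qquad \mathbb{P}\big(L_n \approx \mu\big) \;\asymp\; e^{-n\,I(\mu)},$$

where $I$ is a rate function on the space of probability measures; the point here is to obtain such a principle without the usual irreducibility assumption.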

18.
A new and rather broad class of stationary random tessellations of the d-dimensional Euclidean space is introduced, which we call shape-driven nested Markov tessellations. Locally, these tessellations are constructed by means of a spatio-temporal random recursive split dynamics governed by a family of Markovian split kernels, thereby generalizing the by now classical construction of iteration-stable random tessellations. By providing an explicit global construction of the tessellations, it is shown that under suitable (shape-driven) assumptions on the split kernels there exists a unique time-consistent, whole-space, tessellation-valued Markov process of stationary random tessellations compatible with the given split kernels. Besides this existence and uniqueness result, the typical cell and some aspects of the first-order geometry of these tessellations are the focus of our discussion.

19.
Jacobson and Matthews introduced the most promising known method for efficiently generating uniformly distributed random Latin squares. Cameron conjectures that the same Markov chain will also generate all of the other generalized 2-designs with block size 3 uniformly at random. For a generalization of Latin squares, we give an affirmative result for any admissible parameter values. We also give the first insight and analysis into a generalization of the 1-factorization of the complete graph, giving an affirmative result for some admissible parameter values. © 2012 Wiley Periodicals, Inc. J. Combin. Designs 20: 368–380, 2012

20.
This work is concerned with the weak convergence of non-Markov random processes modulated by a Markov chain. The motivation of our study stems from a wide variety of applications in actuarial science, communication networks, production planning, manufacturing and financial engineering. Owing to various modelling considerations, the modulating Markov chain often has a large state space. Aiming at a reduction of computational complexity, a two-time-scale formulation is used. Under this setup, the Markov chain belongs to the class of nearly completely decomposable chains, whose state space is split into several subspaces. Within each subspace, the transitions of the Markov chain vary rapidly, while among different subspaces the Markov chain moves relatively infrequently. Aggregating all the states of the Markov chain in each subspace into a single super state leads to a new process. It is shown that under such aggregation schemes, a suitably scaled random sequence converges to a switching diffusion process.
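
A common way to write such a two-time-scale formulation (generic notation; the paper's scaling may differ in detail) is to give the modulating chain the generator

$$Q^{\varepsilon} \;=\; \frac{1}{\varepsilon}\,\widetilde{Q} \;+\; \widehat{Q},$$

where $\widetilde{Q}$ is block-diagonal and drives the fast transitions within each subspace, $\widehat{Q}$ governs the infrequent transitions between subspaces, and $\varepsilon > 0$ is small; aggregating each block into a super state and letting $\varepsilon \to 0$ produces the limit in which the scaled sequence converges to a switching diffusion.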
