Similar documents
20 similar documents found
1.
Let ℬ_n be the family of extended binary trees with n internal nodes and assume that each t ∈ ℬ_n has equal probability. We identify the asymptotic distribution of the height of leaf i (where the leaves are enumerated from left to right), as i and n tend to infinity such that i/n tends to x ∈ ]0, 1[, as a Maxwell distribution. A generalization of the combinatorial and probabilistic considerations used is indicated.

2.
ABSTRACT

The asymptotic equipartition property is a basic theorem in information theory. In this paper, we study the strong law of large numbers for Markov chains in a single-infinite Markovian environment on a countable state space. As a corollary, we obtain the strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for this process. Finally, we give the asymptotic equipartition property of Markov chains in a single-infinite Markovian environment on a countable state space.
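The law of large numbers and the equipartition property above can be illustrated numerically in the simplest setting, a fixed (non-random) environment; the two-state transition matrix below is an arbitrary assumption for illustration:

```python
import math
import random

# A minimal sketch: strong law of large numbers for state frequencies of a
# homogeneous two-state Markov chain, and the asymptotic equipartition
# property -(1/n) log p(X_1..X_n) -> entropy rate.  The abstract treats the
# harder case of a random Markovian environment; here the environment is
# frozen, i.e. the transition matrix P is fixed.

P = [[0.9, 0.1],   # transition matrix on states {0, 1}
     [0.4, 0.6]]

def stationary(P):
    # Solve pi = pi P for a 2x2 chain in closed form.
    a, b = P[0][1], P[1][0]
    return [b / (a + b), a / (a + b)]

def simulate(P, n, seed=0):
    rng = random.Random(seed)
    x, path, logp = 0, [0], 0.0
    for _ in range(n - 1):
        nxt = 0 if rng.random() < P[x][0] else 1
        logp += math.log(P[x][nxt])   # log-likelihood of the sample path
        x = nxt
        path.append(x)
    return path, logp

def entropy_rate(P):
    pi = stationary(P)
    return -sum(pi[i] * P[i][j] * math.log(P[i][j])
                for i in range(2) for j in range(2))

n = 200_000
path, logp = simulate(P, n)
freq0 = path.count(0) / n       # empirical frequency of state 0
pi = stationary(P)
aep = -logp / n                 # empirical per-symbol log-likelihood
h = entropy_rate(P)
```

With a long sample path, `freq0` approaches the stationary probability of state 0 and `aep` approaches the entropy rate, as the corresponding theorems predict.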

3.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that sacrifices no significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is determined by its persistent states, obtained after decomposition into classes of communicating states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process is captured by the class.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then justifies an assumption that yields the reduction: the ergodic class is reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system retains the stochastic property, so it is again a Markov chain.
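The ranking step can be sketched as follows; this is a simplified illustration, not the paper's exact algorithm (row renormalization stands in for a proper stochastic complementation, and the mass threshold is an arbitrary choice):

```python
# A minimal sketch of reducing an ergodic Markov chain to its "stronger
# part": states are ranked by steady-state probability and the chain is
# restricted to the states carrying most of the stationary mass.

def stationary(P, iters=2000):
    # Power iteration: pi <- pi P until convergence.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def reduce_chain(P, mass=0.95):
    pi = stationary(P)
    order = sorted(range(len(P)), key=lambda i: -pi[i])
    keep, acc = [], 0.0
    for i in order:            # keep the most probable states
        keep.append(i)
        acc += pi[i]
        if acc >= mass:
            break
    Q = []
    for i in keep:             # restrict and renormalize each row
        row = [P[i][j] for j in keep]
        s = sum(row)
        Q.append([v / s for v in row])
    return keep, Q

# Example: state 2 carries little stationary mass and is dropped.
P = [[0.50, 0.45, 0.05],
     [0.50, 0.45, 0.05],
     [0.50, 0.50, 0.00]]
keep, Q = reduce_chain(P)
```

The reduced matrix `Q` is again stochastic, so the reduced system remains a Markov chain, mirroring the closing remark of the abstract.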

4.
In the present paper, we introduce and study the 𝒢-inhomogeneous Markov system of high order, which is a stochastic process more general in many respects than the known inhomogeneous Markov system. We define the inhomogeneous superficial razor cut mixture transition distribution model, extending the idea of the mixture transition model from the homogeneous case. With the introduction of appropriate vector stochastic processes and the establishment of relationships among them, we study the asymptotic behaviour of the 𝒢-inhomogeneous Markov system of high order. In the form of two theorems, the asymptotic behaviour of the inherent 𝒢-inhomogeneous Markov chain and the expected and relative expected population structure of the 𝒢-inhomogeneous Markov system of high order are provided under assumptions easily met in practice. Finally, we provide an illustration of the present results in a manpower system.

5.
Recursive equations are derived for the conditional distribution of the state of a Markov chain, given observations of a function of the state. Mainly continuous-time chains are considered. The equations for the conditional distribution are given both in matrix form and in differential-equation form. The conditional distribution itself forms a Markov process. Special cases considered are doubly stochastic Poisson processes with a Markovian intensity, Markov chains with a random time, and Markovian approximations of semi-Markov processes. Further, the results are used to compute the Radon-Nikodym derivative between two probability measures for a Markov chain when a function of the state is observed.
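A discrete-time analogue of the filtering recursion can be sketched as follows (the continuous-time equations of the abstract are replaced by the standard predict/correct recursion; the matrices are illustrative assumptions):

```python
# A minimal discrete-time sketch: the conditional distribution of a hidden
# Markov chain's state, given observations of a (noisy) function of the
# state, updated by the filtering recursion in matrix form.

P = [[0.8, 0.2],
     [0.3, 0.7]]          # hidden-state transition matrix
emit = [[0.9, 0.1],       # emit[x][y] = P(observe y | state x)
        [0.2, 0.8]]

def filter_step(pi, y):
    # predict: pi <- pi P, then correct by the likelihood of observation y
    pred = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    post = [pred[j] * emit[j][y] for j in range(2)]
    z = sum(post)                      # normalizing constant
    return [p / z for p in post]

pi = [0.5, 0.5]                        # prior over the hidden state
for y in [0, 0, 1, 0, 1, 1]:           # an observed sequence
    pi = filter_step(pi, y)
```

After the trailing observations `1, 1`, the conditional distribution `pi` concentrates on state 1, as expected; the sequence of such distributions is itself a Markov process, as the abstract notes.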

6.
A Markov process on a compact metric space X is given by random transformations. S is a finite set of continuous transformations of X into itself. A random evolution on X is defined by letting x in X evolve to T(x) for T in S with a probability that depends on x and T but is independent of any other past measurable events. This type of model is often called a place-dependent iterated function system. The transformations are assumed to have either monotone or contractive properties. Theorems are given describing the number and types of ergodic invariant measures. Special emphasis is given to learning models and self-reinforcing random walks. Supported in part by AFOSR Grant No. 91-0215, the Alexander von Humboldt Foundation and SFB 170, University of Göttingen.
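A place-dependent iterated function system can be sketched on X = [0, 1] with two contractions; the maps and the place-dependent weight below are illustrative assumptions, not those studied in the paper:

```python
# A minimal sketch of a place-dependent iterated function system: two
# contractions of [0, 1], chosen with a probability that depends on the
# current point x (and on nothing else from the past).
import random

S = [lambda x: x / 2,            # contraction toward 0
     lambda x: x / 2 + 0.5]      # contraction toward 1

def weight(x):
    # place-dependent probability of choosing the first map
    return 0.25 + 0.5 * x

def orbit(x0, n, seed=1):
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n):
        T = S[0] if rng.random() < weight(x) else S[1]
        x = T(x)
        path.append(x)
    return path

path = orbit(0.3, 1000)
```

Since both maps are contractions of [0, 1] into itself, the orbit stays in the space; the empirical distribution of such orbits approximates an invariant measure of the system.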

7.
Let x_1, ..., x_n be random variables connected into a homogeneous Markov chain. The asymptotic behaviour of the distribution of the number of outliers is investigated for unknown parameters of the chain. Bibliography: 4 titles.

8.
We study countable Markov chains in a Markovian environment and prove that the number of returns of the process to a small cylinder set is asymptotically Poisson distributed. To this end, an entropy function h is introduced, and a Shannon-McMillan-Breiman theorem for Markov chains in a Markovian environment is first established; an example of Poisson approximation for a non-Markovian process is also given. When the environment process degenerates to a constant sequence, the Poisson limit theorem for countable Markov chains is recovered. This extends the corresponding result of Pitskel for finite Markov chains.

9.
Let {Z_t, t ≥ 1} be a sequence of trials taking values in a given set A = {0, 1, 2, ..., m}, where we regard the value 0 as failure and the remaining m values as successes. Let ε be a (single or compound) pattern. In this paper, we provide a unified approach to the study of two joint distributions: the joint distribution of the number X_n of occurrences of ε and the numbers of successes and failures in n trials, and the joint distribution of the waiting time T_r until the r-th occurrence of ε and the numbers of successes and failures observed by that time. We also investigate some distributions obtained as by-products of the two joint distributions. Our methodology is based on two types of the random variable X_n (a Markov chain imbeddable variable of binomial type and a Markov chain imbeddable variable of returnable type). The present work develops several variations of the Markov chain imbedding method and enables us to deal with a variety of applications in different fields. Finally, we discuss several practical examples of our results. This research was partially supported by the ISM Cooperative Research Program (2002-ISM·CRP-2007).
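The Markov chain imbedding idea can be sketched for the simplest single pattern "11" under i.i.d. Bernoulli trials; the compound patterns and joint distributions of the paper are beyond this toy version, which is checked against brute-force enumeration:

```python
# A minimal sketch of Markov chain imbedding: the distribution of the number
# X_n of (overlapping) occurrences of the pattern "11" in n Bernoulli(p)
# trials, tracked by a chain whose state is (count so far, previous outcome).
from itertools import product

def dist_X(n, p):
    # dp[(c, last)] = probability of c occurrences with given last outcome
    dp = {(0, 0): 1.0}
    for _ in range(n):
        nxt = {}
        for (c, last), pr in dp.items():
            # next trial fails (prob 1-p): count unchanged, last = 0
            key = (c, 0)
            nxt[key] = nxt.get(key, 0.0) + pr * (1 - p)
            # next trial succeeds: "11" is completed iff last == 1
            key = (c + (1 if last == 1 else 0), 1)
            nxt[key] = nxt.get(key, 0.0) + pr * p
        dp = nxt
    out = {}
    for (c, _), pr in dp.items():      # marginalize out the last outcome
        out[c] = out.get(c, 0.0) + pr
    return out

def brute(n, p):
    # brute-force check over all 2^n outcome sequences
    out = {}
    for seq in product([0, 1], repeat=n):
        c = sum(1 for i in range(n - 1) if seq[i] == seq[i + 1] == 1)
        pr = 1.0
        for b in seq:
            pr *= p if b else (1 - p)
        out[c] = out.get(c, 0.0) + pr
    return out

d1 = dist_X(6, 0.3)
d2 = brute(6, 0.3)
```

The state space here is tiny; for compound patterns the imbedded chain simply carries more bookkeeping in its state, which is the essence of the method.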

10.
The coalescent     
The n-coalescent is a continuous-time Markov chain on a finite set of states, which describes the family relationships among a sample of n members drawn from a large haploid population. Its transition probabilities can be calculated from a factorization of the chain into two independent components, a pure death process and a discrete-time jump chain. For a deeper study, it is useful to construct a more complicated Markov process in which n-coalescents for all values of n are embedded in a natural way.
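The pure death component mentioned above admits a very short simulation: with k lineages the time to the next coalescence is exponential with rate k(k-1)/2, which yields the classical expected time to the most recent common ancestor, 2(1 - 1/n):

```python
# A minimal sketch of the n-coalescent's death-process component: lineage
# count drops from n to 1, with exponential holding times of rate k(k-1)/2.
import random

def simulate_tmrca(n, seed=0):
    rng = random.Random(seed)
    t, k = 0.0, n
    while k > 1:
        rate = k * (k - 1) / 2
        t += rng.expovariate(rate)   # exponential holding time
        k -= 1                       # two lineages coalesce into one
    return t

def expected_tmrca(n):
    # sum_{k=2}^{n} 2/(k(k-1)) telescopes to 2(1 - 1/n)
    return sum(2.0 / (k * (k - 1)) for k in range(2, n + 1))
```

Averaging many simulated paths reproduces the analytic expectation; the jump chain component of the factorization (which pair coalesces) is uniform over pairs and is not needed for the time to the common ancestor.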

11.
This paper is concerned with long-run average cost minimization of a stochastic inventory problem with Markovian demand, fixed ordering cost, and convex surplus cost. The states of the Markov chain represent different possible states of the environment. Using a vanishing discount approach, a dynamic programming equation and the corresponding verification theorem are established. Finally, the existence of an optimal state-dependent (s, S) policy is proved.
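An (s, S) policy with Markovian demand can be sketched by simulation; all costs, demand distributions and policy levels below are illustrative assumptions, and the simulation only estimates the long-run average cost of one fixed policy rather than proving optimality:

```python
# A minimal sketch: under an (s, S) policy, whenever inventory drops below s
# we order up to S, paying a fixed cost K; demand is Markov-modulated by an
# environment chain; surplus/backlog incurs a convex (here quadratic) cost.
import random

P_env = [[0.7, 0.3], [0.2, 0.8]]     # environment transition matrix
demand_mean = [1, 3]                 # state-dependent mean demand
K, s, S = 5.0, 2, 10                 # fixed order cost and policy levels

def surplus_cost(x):
    return 0.1 * x * x               # convex surplus/backlog cost

def average_cost(T=20_000, seed=0):
    rng = random.Random(seed)
    env, inv, total = 0, S, 0.0
    for _ in range(T):
        if inv < s:                  # order up to S
            total += K
            inv = S
        d = rng.randint(0, 2 * demand_mean[env])   # demand this period
        inv -= d
        total += surplus_cost(inv)
        env = 0 if rng.random() < P_env[env][0] else 1
    return total / T

c = average_cost()
```

Sweeping (s, S) over a grid and comparing such estimates is the simulation-based counterpart of the dynamic-programming argument in the paper.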

12.
Gennadi Falin, Anatoli Falin. TOP, 1999, 7(2): 279-291
M/G/1-type queueing systems whose arrival rate is a function of an independent continuous-time Markov chain are considered. We suggest a simple analytical approach which allows rigorous mathematical analysis of the stationary characteristics under heavy traffic. Their asymptotic behaviour is described in terms of characteristics of the modulating process (defined as a solution of a set of linear algebraic equations). The analysis is based on certain "semi-explicit" formulas for the performance characteristics. This research was supported by INTAS under grant No. 96-0828.

13.
We present numerical methods for obtaining the stationary distribution of states for multi-server retrial queues with a Markovian arrival process, a phase-type service time distribution with two states and a finite buffer, as well as moments of the waiting time. The methods are direct extensions of those for single-server retrial queues earlier developed by the authors. The queue is modelled as a level-dependent Markov process, and the generator for the process is approximated by one which is spatially homogeneous above some level N. The level N is chosen such that the probability associated with the homogeneous part of the approximated system is bounded by a small tolerance, and the generator is eventually truncated above that level. Solutions are obtained by efficient application of block Gaussian elimination.
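The truncation idea can be sketched on the simplest level-structured example, a birth-death chain cut off at level N (which for constant rates is just M/M/1/N, giving an independent closed-form check); plain rather than block Gaussian elimination is used:

```python
# A minimal sketch: build the generator of a birth-death chain truncated at
# level N and solve pi Q = 0 with the normalization sum(pi) = 1.

def stationary_truncated(lam, mu, N):
    n = N + 1
    Q = [[0.0] * n for _ in range(n)]       # generator matrix
    for k in range(n):
        if k < N:
            Q[k][k + 1] = lam               # birth (arrival) rate
        if k > 0:
            Q[k][k - 1] = mu                # death (service) rate
        Q[k][k] = -sum(Q[k])
    # Equations: rows of Q^T; replace the last (redundant) balance equation
    # by the normalization sum(pi) = 1.
    A = [[Q[i][j] for i in range(n)] for j in range(n)]
    A[n - 1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):          # back substitution
        pi[r] = (b[r] - sum(A[r][j] * pi[j] for j in range(r + 1, n))) / A[r][r]
    return pi

pi = stationary_truncated(1.0, 2.0, 5)      # M/M/1/5 with rho = 0.5
```

For constant rates the answer must match the geometric form pi_k ∝ (lam/mu)^k; level-dependent rates simply change the entries of `Q`, and the block structure exploited in the paper corresponds to eliminating whole levels at a time.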

14.
Summary. Let D be a bounded domain in R^d with regular boundary. Let X = (X_t, P_x) be a standard Markov process in D with continuous paths up to its lifetime. If X satisfies some weak conditions, then it is possible to add a non-local part to its generator and construct the corresponding standard Markov process in D with Brownian exit distributions from D. This work was done while the author was an Alexander von Humboldt fellow at the Universität des Saarlandes in Saarbrücken, Germany.

15.
The paper considers the problem of optimum preventive maintenance of a system with two elements and one restoring device. The system's behaviour is described by a semi-Markov process with a complex discrete and continuous set of states. Whether preventive maintenance is performed depends on the state of the system's elements and on the past-life level of the element performing the duties of the main element. The paper solves the system of integral equations for the stationary distribution of a Markov chain embedded in the given semi-Markov process. This result makes it possible to find various stationary functionals on the process trajectories and reduces the optimum-control task to finding the extreme value of a given function of two real variables.

16.
A set L of points in the d-space E^d is said to illuminate a family F = {S_1, ..., S_n} of n disjoint compact sets in E^d if for every set S_i in F and every point x in the boundary of S_i there is a point v in L such that v illuminates x, i.e. the line segment joining v to x intersects the union of the elements of F in exactly {x}. The problem we treat is the size of a set L needed to illuminate a family F = {S_1, ..., S_n} of n disjoint compact sets in E^d. We also treat the problem of putting these convex sets in mutually disjoint convex polytopes, each one having at most a certain number of facets.

17.
An approximation of Markov-type queueing models with fast Markov switches by Markov models with averaged transition rates is studied. First, an averaging principle for a two-component Markov process is proved in the following form: if the first component has fast switches, then under some asymptotic mixing conditions the second component weakly converges in Skorokhod space to a Markov process whose transition rates are averaged with respect to stationary measures constructed from the first component. The convergence of the stationary distribution of the two-component process is studied as well. The approximation of state-dependent queueing systems of the type M_{M,Q}/M_{M,Q}/m/N with fast Markov switches is considered.

18.
Lothar Breuer. Queueing Systems, 2001, 38(1): 67-76
In queueing theory, most models are based on time-homogeneous arrival processes and service time distributions. However, in communication networks arrival rates and/or the service capacity usually vary periodically in time. In order to reflect this property accurately, one needs to examine periodic rather than homogeneous queues. In the present paper, the periodic BMAP/PH/c queue is analysed. This queue has a periodic BMAP arrival process, which is defined in this paper, and phase-type service time distributions. As a Markovian queue, it can be analysed like an (inhomogeneous) Markov jump process. The transient distribution is derived by solving the Kolmogorov forward equations. Furthermore, a stability condition in terms of arrival and service rates is proven, and for the case of stability the asymptotic distribution is given explicitly. This turns out to be a periodic family of probability distributions. It is sketched how to analyse the periodic BMAP/M_t/c queue with periodically varying service rates by the same method.
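Solving the Kolmogorov forward equations for a small inhomogeneous chain can be sketched as follows; the two-state generator with a periodically varying arrival rate is an illustrative assumption, and forward Euler stands in for a more careful integrator:

```python
# A minimal sketch of the transient distribution of a time-inhomogeneous
# Markov jump process: integrate the Kolmogorov forward equations
# p'(t) = p(t) Q(t) for a two-state chain with a periodic rate.
import math

def Q(t):
    lam = 1.0 + 0.5 * math.sin(2 * math.pi * t)   # periodic 0 -> 1 rate
    mu = 2.0                                      # constant 1 -> 0 rate
    return [[-lam, lam], [mu, -mu]]

def transient(p0, T, h=1e-4):
    # forward Euler: p(t + h) = p(t) + h * p(t) Q(t)
    p, t = list(p0), 0.0
    for _ in range(int(T / h)):
        q = Q(t)
        p = [p[0] + h * (p[0] * q[0][0] + p[1] * q[1][0]),
             p[1] + h * (p[0] * q[0][1] + p[1] * q[1][1])]
        t += h
    return p

p = transient([1.0, 0.0], 5.0)
```

Because the rows of Q(t) sum to zero, the Euler step preserves total probability exactly; after the transient dies out, p(t) settles into a periodic family of distributions, mirroring the asymptotic result of the paper.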

19.
A fixed point sequence is singular if the Jacobian matrix at the limit has 1 as an eigenvalue. The asymptotic behaviour of some singular fixed point sequences in one dimension is extended to N dimensions. Three algorithms for extrapolating singular fixed point sequences in N dimensions are given. Using numerical examples, the three algorithms are tested and compared.
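A one-dimensional illustration (not one of the paper's three N-dimensional algorithms): for x_{n+1} = sin(x_n) the derivative at the limit 0 is 1, so the sequence is singular and converges only like n^{-1/2}; a modified Aitken step using the known order p = 2 cancels the leading error term:

```python
# A minimal sketch of extrapolating a singular fixed point sequence.  For
# x_{n+1} = f(x_n) with f(x) = x - c x^{p+1} + ..., plain iteration gives
# x_n ~ (c p n)^{-1/p}.  The modified Aitken step
#   t = x_n - (p + 1) (dx)^2 / d2x
# cancels the leading term (plain Aitken, with factor 1, does not).
import math

def iterate(f, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

def modified_aitken(xs, p):
    x0, x1, x2 = xs[-3:]
    dx, d2x = x1 - x0, x2 - 2 * x1 + x0
    return x0 - (p + 1) * dx * dx / d2x

xs = iterate(math.sin, 1.0, 60)   # sin(x) = x - x^3/6 + ..., so p = 2
t = modified_aitken(xs, 2)
```

The extrapolated value is far closer to the limit 0 than the last iterate; in N dimensions the difficulty, addressed by the paper's algorithms, is that the singular direction must be handled separately from the non-singular ones.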

20.
In this paper, we extend the Markov-modulated reflected Brownian motion model discussed in [1] to a Markov-modulated reflected jump diffusion process, where the jump component is described as a Markov-modulated compound Poisson process. We compute the joint stationary distribution of the bivariate Markov jump process. A two-state example is given to illustrate how the stationary equation, described as a system of ordinary integro-differential equations, is solved by choosing appropriate boundary conditions. As a special case, we also give the stationary distribution for this Markov jump process without Markovian regime-switching.
