Similar Documents
20 similar documents found.
1.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. Thus, a reduced model that does not sacrifice significant accuracy can be very attractive.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state that is independent of the initial distribution. Such a class constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process has been captured by it.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then enables us to make the assumption that yields the reduction: the ergodic class is reduced to its stronger part alone, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it remains a Markov chain.
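A minimal numpy sketch of the ordering-and-truncation idea described above; the function name, the keep_mass threshold, and the row-renormalisation rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def reduce_ergodic_chain(P, keep_mass=0.95):
    """Truncate an irreducible chain to its most probable states.

    Sketch: rank states by stationary probability, keep the smallest set
    carrying `keep_mass` of the stationary mass (the "stronger part"),
    and renormalise rows so the reduced matrix stays stochastic.
    """
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()

    order = np.argsort(pi)[::-1]              # decreasing steady-state probability
    k = np.searchsorted(np.cumsum(pi[order]), keep_mass) + 1
    strong = np.sort(order[:k])               # indices of the retained states

    Q = P[np.ix_(strong, strong)]
    Q = Q / Q.sum(axis=1, keepdims=True)      # renormalise: result is again a Markov chain
    return strong, Q, pi[strong] / pi[strong].sum()
```

Under the Two-Time-Scale property the discarded transitions carry little probability, so the renormalisation perturbs the retained dynamics only slightly.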

2.
We propose the construction of a quantum Markov chain that corresponds to a “forward” quantum Markov chain. In this construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions.

3.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state, the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite-state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing, in the latter it is regular.
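A hedged Python sketch of the capped duration-specific chain: the state is a pair (location, duration), and the probability of leaving decays with duration (cumulative inertia). The decay law base_leave * inertia**duration and all parameter names are invented for illustration.

```python
import numpy as np
from itertools import product

def duration_specific_chain(n_loc=3, max_dur=5, base_leave=0.5, inertia=0.7,
                            absorbing_max=False):
    """Finite-state 'duration specific' chain with duration capped at max_dur.

    absorbing_max=True makes the maximal duration absorbing, case (a) of the
    abstract; absorbing_max=False gives the regular chain of case (b).
    """
    states = list(product(range(n_loc), range(max_dur)))   # (location, duration)
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for loc, dur in states:
        i = idx[(loc, dur)]
        if absorbing_max and dur == max_dur - 1:
            P[i, i] = 1.0                          # maximal duration absorbs
            continue
        leave = base_leave * inertia ** dur        # cumulative inertia: decays with duration
        for other in range(n_loc):
            if other != loc:
                P[i, idx[(other, 0)]] = leave / (n_loc - 1)       # moving resets duration
        P[i, idx[(loc, min(dur + 1, max_dur - 1))]] = 1.0 - leave  # staying, duration capped
    return states, P

# Equilibrium of the regular case (b), approximated by iterating the chain.
states, P = duration_specific_chain()
pi = np.linalg.matrix_power(P, 500)[0]             # any starting row converges
```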

4.
In this paper circuit chains of superior order are defined as multiple Markov chains whose transition probabilities are expressed in terms of the weights of a finite class of circuits in a finite set, in connection with kinetic properties along the circuits. Conversely, it is proved that if we join any finite doubly infinite strictly stationary Markov chain of order r, for which transitions hold cyclically, with a second chain having the same transitions for the inverse time-sense, then the pair may be represented as circuit chains of order r.

5.
In the present paper we introduce and study the G-inhomogeneous Markov system of high order, a stochastic process that is in many respects more general than the known inhomogeneous Markov system. We define the inhomogeneous superficial razor cut mixture transition distribution model, extending the idea of the mixture transition model for the homogeneous case. With the introduction of an appropriate vector stochastic process and the establishment of relationships among its components, we study the asymptotic behaviour of the G-inhomogeneous Markov system of high order. In the form of two theorems, the asymptotic behaviour of the inherent G-inhomogeneous Markov chain and the expected and relative expected population structures of the G-inhomogeneous Markov system of high order are provided under assumptions easily met in practice. Finally, we provide an illustration of the present results in a manpower system.

6.
Summary. We study a Markovian process whose state space is the product of a set of n points and the real x-axis. Under certain regularity conditions this study is equivalent to investigating the solution of a set of coupled diffusion equations, a generalization of the Fokker-Planck (or second Kolmogorov) equation. Assuming the process homogeneous in x, but in general time-inhomogeneous, this set of equations is studied with the help of the Fourier transformation. The marginal distribution over the n discrete states corresponds to a time-inhomogeneous n-state Markov chain in continuous time. The properties of such a Markov chain are studied, especially the asymptotic behaviour in the time-periodic case. We obtain a natural generalization of the well-known asymptotic behaviour in the time-homogeneous case, finding a subdivision of the states into groups of essential states, the distribution inside each group being asymptotically periodic and independent of the starting distribution. Next, still assuming time-periodicity, we study the asymptotic behaviour of the complete Markovian process, showing that inside each of the groups mentioned above the distribution approaches a common normal distribution in x-space, with mean value and variance proportional to t. Explicit expressions for the proportionality factors are derived. The general theory is applied to the electrodiffusion equations, corresponding to n = 2.

7.
This paper is concerned with the circumstances under which a discrete-time absorbing Markov chain has a quasi-stationary distribution. We showed in a previous paper that a pure birth–death process with an absorbing bottom state has a quasi-stationary distribution (actually an infinite family of quasi-stationary distributions) if and only if absorption is certain and the chain is geometrically transient. If we widen the setting by allowing absorption in one step (killing) from any state, the two conditions are still necessary, but no longer sufficient. We show that the birth–death type of behaviour prevails as long as the number of states in which killing can occur is finite. But if there are infinitely many such states, and if the chain is geometrically transient and absorption certain, then there may be 0, 1, or infinitely many quasi-stationary distributions. Examples of each type of behaviour are presented. We also survey and supplement the theory of quasi-stationary distributions for discrete-time Markov chains in general.
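For a finite transient class the quasi-stationary distribution can be computed directly; the sketch below, a plain numpy routine under assumed names, takes the substochastic block Q of transition probabilities among the transient states and returns its left Perron eigenvector. The infinite-state subtleties of the abstract (0, 1, or infinitely many QSDs) have no finite counterpart.

```python
import numpy as np

def quasi_stationary(Q):
    """Left Perron eigenvector of the substochastic block Q of a finite
    absorbing chain: the quasi-stationary distribution, with decay rho."""
    w, v = np.linalg.eig(Q.T)
    k = np.argmax(np.real(w))                # Perron root of Q
    pi = np.abs(np.real(v[:, k]))
    return pi / pi.sum(), np.real(w[k])

# Birth-death chain on {1, 2, 3}, absorbed at 0 (killing from state 1 only).
Q = np.array([[0.0, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [0.0, 0.4, 0.6]])
qsd, rho = quasi_stationary(Q)   # 1 - rho = per-step absorption rate under the QSD
```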

8.
We analyze sequences of letters on a ring. Our objective is to determine the statistics of the occurrences of a set of r-letter words when the sequence is chosen as a periodic Markov chain of order ≤ r − 1. We first obtain a generating function for the associated probability distribution and then display its Poisson limit. For an i.i.d. letter sequence, correction terms to the Poisson limit are given. Finally, we indicate how a hidden Markov chain fits into this scheme.
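A small simulation sketch (invented parameters and names throughout): counting occurrences of an r-letter word on a ring of letters generated by a first-order Markov chain, so the empirical counts can be set against the Poisson limit. The chain is started from a uniform letter rather than stationarily, a simplification.

```python
import numpy as np

def count_on_ring(seq, word):
    """Occurrences of `word` in `seq` read circularly (n starting positions)."""
    n, r = len(seq), len(word)
    ring = np.concatenate([seq, seq[:r - 1]])          # wrap the ring once
    return sum(int(np.array_equal(ring[i:i + r], word)) for i in range(n))

def markov_ring(P, n, rng):
    """A letter sequence of length n from transition matrix P (first order)."""
    m = len(P)
    s = [rng.integers(m)]
    for _ in range(n - 1):
        s.append(rng.choice(m, p=P[s[-1]]))
    return np.array(s)

# Empirical distribution of the count of one 3-letter word (order 1 <= r - 1).
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.5, 0.5]])
word = np.array([0, 1, 0])
counts = [count_on_ring(markov_ring(P, 200, rng), word) for _ in range(2000)]
```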

9.
Asymptotic properties of singularly perturbed Markov chains having measurable and/or continuous generators are developed in this work. The Markov chain under consideration has a finite state space and is allowed to be nonstationary. Its generator consists of a rapidly varying part and a slowly changing part. The primary concerns are the properties of the probability vectors and of an aggregated process, which depend on the characteristics of the fast varying part of the generator. The fast changing part of the generator can either consist of l recurrent classes, or also include transient states in addition to the recurrent classes. The case of inclusion of transient states is examined in detail. Convergence of the probability vectors under the weak topology of L2 is obtained first. Then, under slightly stronger conditions, it is shown that the convergence also takes place pointwise. Moreover, convergence under the norm topology of L2 is derived. Furthermore, a process with aggregated states is obtained which converges to a Markov chain in distribution.
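The standard aggregation step for such two-time-scale generators G = G_fast/ε + G_slow can be sketched as follows; the code assumes the recurrent classes of the fast part are given, ignores transient states (which the paper does treat), and uses invented function names.

```python
import numpy as np

def aggregate(G_slow, G_fast, classes):
    """Aggregated generator on the recurrent classes of the fast part.

    classes: list of index arrays; within each class the fast block has
    zero row sums (no fast transitions leave the class).
    """
    def stationary(Q):                      # stationary law nu of one fast block
        n = len(Q)
        A = np.vstack([Q.T, np.ones(n)])    # solve nu Q = 0 together with nu . 1 = 1
        b = np.zeros(n + 1); b[-1] = 1.0
        return np.linalg.lstsq(A, b, rcond=None)[0]

    nus = [stationary(G_fast[np.ix_(c, c)]) for c in classes]
    K = len(classes)
    Gbar = np.zeros((K, K))
    for k, (ck, nu) in enumerate(zip(classes, nus)):
        for l, cl in enumerate(classes):
            # average the slow rates from class k into class l under nu
            Gbar[k, l] = nu @ G_slow[np.ix_(ck, cl)].sum(axis=1)
    return Gbar
```

The aggregated process jumps between the classes at these averaged slow rates, which is the Markov chain limit mentioned at the end of the abstract.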

10.
We study general geometric techniques for bounding the spectral gap of a reversible Markov chain. We show that the best bound obtainable using these techniques can be computed in polynomial time via semidefinite programming, and is off by at most a factor of order log²n, where n is the number of states. Random Struct. Alg., 11, 299–313 (1997)
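For reference, the exact spectral gap that such geometric bounds approximate can be computed directly for small chains; a numpy sketch (the symmetrisation trick is standard, the function name is ours):

```python
import numpy as np

def spectral_gap(P, pi):
    """Spectral gap 1 - lambda_2 of a transition matrix P reversible w.r.t. pi.

    Reversibility makes S = D^{1/2} P D^{-1/2} symmetric, so its
    eigenvalues are real and can be computed stably with eigh.
    """
    d = np.sqrt(pi)
    S = (P * d[:, None]) / d[None, :]   # D^{1/2} P D^{-1/2}
    lam = np.linalg.eigvalsh(S)         # sorted ascending; lam[-1] == 1
    return 1.0 - lam[-2]

# Example: a reversible 3-state chain with stationary distribution pi.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
pi = np.array([0.25, 0.5, 0.25])
gap = spectral_gap(P, pi)
```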

11.
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M (≥ 2) states.
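The property in question invites a direct computation: breadth-first search on the transition graph yields, for each state, the fewest steps in which it is reachable from a given state, and in an M-state chain every reachable state is reached within M − 1 steps. A sketch with our own naming:

```python
from collections import deque

def steps_to_reach(P, src):
    """Minimal k with P^k[src][j] > 0, via BFS on the transition graph.

    For an M-state chain every state reachable from src appears in the
    result with a value of at most M - 1.
    """
    M = len(P)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        i = queue.popleft()
        for j in range(M):
            if P[i][j] > 0 and j not in dist:
                dist[j] = dist[i] + 1
                queue.append(j)
    return dist   # maps each reachable state to the fewest steps needed
```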

12.
13.
14.
Previous results describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are usually based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by establishing the first exponential bound on the rate of uniform convergence of the ERM algorithm with V-geometrically ergodic Markov chain samples. As an application of this bound, we also obtain generalization bounds for the ERM algorithm with V-geometrically ergodic Markov chain samples and prove that the algorithm is consistent in this setting. The main results obtained in this paper extend the previously known results for i.i.d. observations to the case of V-geometrically ergodic Markov chain samples.

15.
Markov chain theory is proving to be a powerful approach to bootstrapping finite-state processes, especially where the time dependence is nonlinear. In this work we extend the approach to bootstrapping discrete-time continuous-valued processes. To this purpose we solve a minimization problem to partition the state space of a continuous-valued process into a finite number of intervals or unions of intervals (i.e., its states) and to identify the time lags which provide “memory” to the process. A distance is used as the objective function to promote the clustering of states having similar transition probabilities. The problem of the exploding number of alternative partitions in the solution space (which grows with the number of states and the order of the Markov chain) is addressed through a Tabu Search algorithm. The method is applied to bootstrap the series of German and Spanish electricity prices. The analysis of the results confirms the good consistency properties of the method we propose.
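A stripped-down sketch of the idea, with the paper's Tabu-Search partition selection replaced by fixed equiprobable quantile bins and the order fixed to one; every parameter and name here is an illustrative assumption.

```python
import numpy as np

def markov_bootstrap(x, n_states=8, length=None, rng=None):
    """First-order Markov bootstrap of a continuous-valued series.

    Discretise into quantile bins, estimate the transition matrix,
    simulate a state path, and resample observed values within bins.
    """
    rng = rng or np.random.default_rng()
    length = length or len(x)
    edges = np.quantile(x, np.linspace(0, 1, n_states + 1)[1:-1])
    s = np.digitize(x, edges)                      # state sequence in 0..n_states-1
    counts = np.ones((n_states, n_states))         # Laplace smoothing
    for a, b in zip(s[:-1], s[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    pools = [x[s == k] for k in range(n_states)]   # observed values per state
    out, state = [], rng.integers(n_states)
    for _ in range(length):
        state = rng.choice(n_states, p=P[state])
        out.append(rng.choice(pools[state]))       # resample within the bin
    return np.array(out)
```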

16.
Members of a population of fixed size N can be in any one of n states. In discrete time the individuals jump from one state to another, independently of each other and with probabilities described by a homogeneous Markov chain. At each time a sample of size M is withdrawn (with replacement). Based on these observations, and using the techniques of Hidden Markov Models, recursive estimates of the distribution of the population are obtained.

17.
In this article, we provide predictable and chaotic representations for Itô–Markov additive processes X. Such a process is governed by a finite-state continuous-time Markov chain J which allows one to modify the parameters of the Itô-jump process (in a so-called regime-switching manner). In addition, a transition of J triggers a jump of X whose distribution depends on the state of J just prior to the transition. This family of processes includes Markov-modulated Itô–Lévy processes and Markov additive processes. The derived chaotic representation of a square-integrable random variable is given as a sum of stochastic integrals with respect to some explicitly constructed orthogonal martingales. We identify the predictable representation of a square-integrable martingale as a sum of stochastic integrals of predictable processes with respect to Brownian motion and power-jump martingales related to all the jumps appearing in the model. This result generalizes the seminal result of Jacod–Yor and is of importance in financial mathematics. The derived representation then allows one to enlarge the incomplete market by a series of power-jump assets and to price all market derivatives.

18.
This article investigates the problem of the definition and computation of an H2-type norm for discrete-time time-varying periodic stochastic linear systems simultaneously affected by multiplicative white noise perturbations and random jumping according to a Markov chain with a countably infinite number of states. We also solve an optimization problem that contains, as a special case, the H2 optimal control problem for the considered class of stochastic systems under the assumption of perfect state measurements.

19.
In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modelling of open systems whose dynamics has a Markov property.
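The dynamics described can be sketched in a few lines (finite horizon, invented names): at each step the current mass is transported along the transition probabilities while the source injects fresh mass, and everything eventually piles up in the absorbing states.

```python
import numpy as np

def mass_flux(P, source, steps=200):
    """Propagate an externally supplied conserved mass through the chain.

    Each step every state forwards its mass along P and the source vector
    injects fresh mass; mass accumulates in the absorbing states.
    """
    m = np.zeros(len(P))
    history = []
    for _ in range(steps):
        m = m @ P + source       # transport + external supply
        history.append(m.copy())
    return np.array(history)

# Two transient states feeding an absorbing state 2.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
flux = mass_flux(P, source=np.array([1.0, 0.0, 0.0]))
```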

20.
The limit behavior of Markov chains with discrete time and a finite number of states (MCDT), as a function of the number n of steps, has been almost completely investigated [1–4]. In [5], MCDT with forbidden transitions were investigated, and in [6], the sum of a random number of functionals of random variables related by a homogeneous Markov chain (HMC) was considered. In the present paper, we continue the investigation of the limit behavior of MCDT with a random stopping time determined by a Markov walk plan II with a fixed number of certain transitions [7, 8]. Here we apply a method similar to that of [6], which allows us to obtain, together with some generalizations of the results of [6], a number of new assertions. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 119–130, Perm, 1990.
