Similar Documents
20 similar documents found (search time: 31 ms)
1.
We consider a discrete-time Markov chain on the non-negative integers with drift to infinity and study the limiting behavior of the state probabilities conditioned on not having left state 0 for the last time. Using a transformation, we obtain a dual Markov chain with an absorbing state such that absorption occurs with probability 1. We prove that the state probabilities of the original chain conditioned on not having left state 0 for the last time are equal to the state probabilities of its dual conditioned on non-absorption. This allows us to establish the simultaneous existence, and then equivalence, of their limiting conditional distributions. Although a limiting conditional distribution for the dual chain is always a quasi-stationary distribution in the usual sense, a similar statement is not possible for the original chain.
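The conditioning on non-absorption described above can be illustrated numerically. The following sketch uses a hypothetical two-state sub-stochastic matrix Q (not from the paper): iterating the state distribution of the dual chain conditioned on non-absorption converges to the normalized left Perron eigenvector of Q, i.e. a quasi-stationary distribution.

```python
import numpy as np

# Toy absorbing chain: state 0 absorbs with probability 1; Q is the
# sub-stochastic transition matrix restricted to the transient states {1, 2}.
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])

# Iterate the distribution conditioned on non-absorption:
# pi_{n+1} = pi_n Q / (pi_n Q) 1.  Since Q is primitive, the limit is the
# normalized left Perron eigenvector of Q, a quasi-stationary distribution.
pi = np.array([1.0, 0.0])
for _ in range(200):
    pi = pi @ Q
    pi /= pi.sum()

# Cross-check against the Perron left eigenvector of Q.
vals, vecs = np.linalg.eig(Q.T)
v = np.real(vecs[:, np.argmax(np.real(vals))])
v /= v.sum()
assert np.allclose(pi, v, atol=1e-8)
print(pi)  # approximately [0.4, 0.6] for this Q
```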

2.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore of considerable interest.

The long-run behaviour of a homogeneous finite Markov chain is determined by its persistent states, obtained after decomposing the state space into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once the process has been captured by the class.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The reduction method presented here is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then justifies an assumption that yields the reduction: the ergodic class is reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system retains the stochastic property, so it is again a Markov chain.
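A minimal sketch of this kind of reduction, assuming a hypothetical 3-state irreducible chain and keeping only the two states with the largest steady-state probabilities (the paper's actual algorithm and its Two-Time-Scale test are more involved):

```python
import numpy as np

# Hypothetical irreducible 3-state chain (rows sum to 1).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.30, 0.30, 0.40]])

# Steady state: left eigenvector of P for eigenvalue 1, normalized.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# Arrange states in decreasing order of steady-state probability,
# keep the "stronger part", and renormalize rows so the reduced
# system is again a Markov chain.
order = np.argsort(pi)[::-1]
keep = order[:2]
P_red = P[np.ix_(keep, keep)]
P_red /= P_red.sum(axis=1, keepdims=True)

assert np.allclose(P_red.sum(axis=1), 1.0)  # reduced chain is stochastic
```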

3.
In this article, we study the error covariance of the recursive Kalman filter when the parameters of the filter are driven by a Markov chain taking values in a countably infinite set. We neither assume ergodicity nor require the existence of limiting probabilities for the Markov chain. The error covariance matrix of the filter depends on the Markov state realizations and hence forms a stochastic process. We show in a rather direct and comprehensive manner that this error covariance process is mean bounded under the standard stochastic detectability concept. Illustrative examples are included.
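The boundedness claim can be explored by simulation. The sketch below uses hypothetical scalar modes and noise levels, and a finite rather than countably infinite mode set: it runs the Riccati recursion of the Kalman filter along a realization of the Markov mode process and observes that the error covariance trajectory stays bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical filter modes with mode-dependent dynamics.
A = [np.array([[0.9]]), np.array([[0.5]])]
C = [np.array([[1.0]]), np.array([[1.0]])]
Qn, Rn = np.array([[0.1]]), np.array([[0.2]])  # process / measurement noise
T = np.array([[0.8, 0.2],                       # mode transition matrix
              [0.3, 0.7]])

# Riccati recursion driven by the Markov mode realization:
# P_{k+1} = A P A' + Q - A P C' (C P C' + R)^{-1} C P A'
P, mode, traj = np.eye(1), 0, []
for _ in range(500):
    a, c = A[mode], C[mode]
    S = c @ P @ c.T + Rn
    K = a @ P @ c.T @ np.linalg.inv(S)
    P = a @ P @ a.T + Qn - K @ c @ P @ a.T
    traj.append(P[0, 0])
    mode = rng.choice(2, p=T[mode])

# The error covariance is a stochastic process; here it remains bounded.
assert 0.0 < max(traj) < 10.0
```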

4.
We consider discrete-time single-server queues fed by independent, heterogeneous sources with geometrically distributed idle periods. While active, each source generates cells depending on the state of an underlying Markov chain. We first derive a general and explicit formula for the mean buffer contents in steady state when the underlying Markov chain of each source has finitely many states. Next we show the applicability of the general formula to queues fed by independent sources with infinite-state underlying Markov chains and discrete phase-type active periods. We then provide explicit formulas for the mean buffer contents in queues with Markovian autoregressive sources and greedy sources. Further, we study two limiting cases in general settings: in the first, the lengths of the active periods of each source are governed by an infinite-state absorbing Markov chain; the second is the model obtained in the limit as the number of sources goes to infinity under an appropriate normalizing condition. The latter limit leads to a queue with (generalized) M/G/∞ input sources. We provide sufficient conditions under which the general formula is applicable to these limiting cases. AMS subject classification: 60K25, 60K37, 60J10. This revised version was published online in June 2005 with a corrected cover date.

5.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.

6.
We study countable Markov chains in Markovian environments and prove that the number of returns of the process to small cylinder sets is asymptotically Poisson distributed. To this end, we introduce an entropy function h and first establish the Shannon-McMillan-Breiman theorem for Markov chains in Markovian environments; we also give an example of Poisson approximation for a non-Markovian process. When the environment process degenerates to a constant sequence, we obtain the Poisson limit theorem for countable Markov chains. This extends the corresponding result of Pitskel for finite Markov chains.

7.
Abstract

The problem of mean square exponential stability for a class of discrete-time linear stochastic systems subject to independent random perturbations and Markovian switching is investigated. The case of linear systems whose coefficients depend on both the present and the previous state of the Markov chain is considered. Three different definitions of the concept of exponential stability in mean square are introduced, and it is shown that they are not always equivalent. One definition is given in terms of the exponential stability of the evolution defined by a sequence of linear positive operators on an ordered Hilbert space; the other two are given in terms of different types of exponential behavior of the trajectories of the considered system. In our approach, the Markov chain is not fixed in advance: the only available information about it is the sequence of probability transition matrices and the set of its states. One thus obtains that, if the system is affected by Markovian jumping, the property of exponential stability is independent of the initial distribution of the Markov chain.

The definition expressed in terms of exponential stability of the evolution generated by a sequence of linear positive operators, allows us to characterize the mean square exponential stability based on the existence of some quadratic Lyapunov functions.

The results developed in this article may be used to derive some procedures for designing stabilizing controllers for the considered class of discrete-time linear stochastic systems in the presence of a delay in the transmission of the data.

8.
宋明珠, 吴永锋. 《数学杂志》 (Journal of Mathematics), 2015, 35(2): 368-374
This paper studies the strong law of large numbers for functions of Markov double chains in Markovian random environments. By analyzing the double-chain functions piecewise, we obtain a sufficient condition for the strong law of large numbers to hold for them. Using this law, we derive limit properties of the transition probabilities of the Markov double chain from one state to another, thereby extending the limit properties of Markov double chains.

9.
A multi-server retrial queueing model with a Batch Markovian Arrival Process and phase-type service time distribution is analyzed. The continuous-time multi-dimensional Markov chain describing the behavior of the system is investigated by reducing it to a corresponding discrete-time multi-dimensional Markov chain. The latter belongs to the class of multi-dimensional quasi-Toeplitz Markov chains in the case of a constant retrial rate, and to the class of multi-dimensional asymptotically quasi-Toeplitz Markov chains in the case of an infinitely increasing retrial rate. This makes it possible to obtain existence conditions for the stationary distribution and to develop algorithms for calculating the stationary state probabilities.

10.
In this paper, we present a Markovian modelling framework that can describe any serial production system with rework. Each production stage is represented by a state in the Markov chain. Absorbing states indicate the events of scrapping a product at a production stage or the completion of the finished product. Generalizable formulae for the final absorption probabilities are derived that represent: (1) the probability that an unfinished product is scrapped at a certain production stage and (2) the yield of the system. We also derive various expected costs and quantities associated with all products ending in any absorbing state, as well as the equivalent costs and quantities for finished products. The applicability of our modelling framework is demonstrated in a real-life manufacturing environment in the food-packing industry.
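Absorption probabilities of the kind described in (1) and (2) follow from the standard fundamental-matrix computation for absorbing Markov chains. A sketch with hypothetical numbers for a 3-stage line (the paper's formulae are more general):

```python
import numpy as np

# Hypothetical 3-stage line. Transient states = stages 1..3; absorbing
# states = {scrap, finished}. From each stage a product advances with
# probability 0.85, is reworked (stays) with 0.10, is scrapped with 0.05.
Q = np.array([[0.10, 0.85, 0.00],   # stage-to-stage transitions
              [0.00, 0.10, 0.85],
              [0.00, 0.00, 0.10]])
R = np.array([[0.05, 0.00],         # stage -> (scrap, finished)
              [0.05, 0.00],
              [0.05, 0.85]])

N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix: expected visits
B = N @ R                           # final absorption probabilities
yield_prob = B[0, 1]                # yield: new product ends up finished

# Every product eventually lands in exactly one absorbing state.
assert np.allclose(B.sum(axis=1), 1.0)
print(yield_prob)  # about 0.84 for these numbers
```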

11.
This paper studies the synthesis of controllers for discrete-time, continuous state stochastic systems subject to omega-regular specifications using finite-state abstractions. Omega-regular properties allow specifying complex behaviors and encompass, for example, linear temporal logic. First, we present a synthesis algorithm for minimizing or maximizing the probability that a discrete-time switched stochastic system with a finite number of modes satisfies an omega-regular property. Our approach relies on a finite-state abstraction of the underlying dynamics in the form of a Bounded-parameter Markov Decision Process arising from a finite partition of the system’s domain. Such Markovian abstractions allow for a range of probabilities of transition between states for each selected action representing a mode of the original system. Our method is built upon an analysis of the Cartesian product between the abstraction and a Deterministic Rabin Automaton encoding the specification of interest or its complement. Specifically, we show that synthesis can be decomposed into a qualitative problem, where the so-called greatest permanent winning components of the product automaton are created, and a quantitative problem, which requires maximizing the probability of reaching this component in the worst-case instantiation of the transition intervals. Additionally, we propose a quantitative metric for measuring the quality of the designed controller with respect to the continuous abstracted states and devise a specification-guided domain partition refinement heuristic with the objective of reaching a user-defined optimality target. Next, we present a method for computing control policies for stochastic systems with a continuous set of available inputs. In this case, the system is assumed to be affine in input and disturbance, and we derive a technique for solving the qualitative and quantitative problems in the resulting finite-state abstractions of such systems. 
For this, we introduce a new type of abstraction called Controlled Interval-valued Markov Chains. Specifically, we show that the greatest permanent winning components of such abstractions are found by appropriately partitioning the continuous input space in order to generate a bounded-parameter Markov decision process that accounts for all possible qualitative transitions between the finite set of states. The problem of maximizing the probability of reaching these components is then cast as a (possibly non-convex) optimization problem over the continuous set of available inputs. A metric of quality for the synthesized controller and a partition refinement scheme are described for this framework as well. Finally, we present a detailed case study.

12.
The paper outlines a case for taking greater interest in the bottomless, or infinitely deep, dam model in hydrology. It then shows that for such a model with unit withdrawals and an ergodic Markov chain input process, the limiting distribution of depletion, when it exists, is a zero-modified geometric distribution. This result generalises the well-known result for independent inputs. The technical conditions required for the proof are satisfied for finite-state-space input processes and are shown to be satisfied by certain infinite-state-space input processes. These include as special cases examples which have a negative binomial limiting input distribution.

13.
This paper studies Markov chains in Markovian environments. Using properties of Markov double chains, we obtain several estimates for the probability that a Markov chain in a Markovian environment returns to small cylinder sets.

14.
In this paper, we present a host-parasitoid model with correlated events. We apply a block-structured state-dependent (BSDE) approach that provides a methodological tool for modelling state-dependent Markovian transitions operating in the presence of phases. A particularly appealing feature of the resulting BSDE host-parasitoid model is that it allows us to handle non-exponential distributional assumptions on host birth, parasitoid death, and parasitism, while keeping the dimensionality of the underlying block-structured Markov chain tractable. Numerical examples are presented to illustrate the effects of the correlation structure on the expected extinction times and the extinction probabilities.

15.
16.
We extend the model in [Korn, R., Rogers, L.C.G., 2005. Stock paying discrete dividends: modelling and option pricing. Journal of Derivatives 13, 44–49] for (discrete) dividend processes to incorporate the dependence of assets on the market mode or the state of the economy, where the latter is modeled by a hidden finite-state Markov chain. We then derive the resulting dynamics of the stock price and various option-pricing formulae. It turns out that the stock price jumps not only at the time of the dividend payment, but also when the underlying Markov chain jumps.

17.
In this paper, we derive the stochastic maximum principle for optimal control problems of the forward-backward Markovian regime-switching system. The control system is described by an anticipated forward-backward stochastic pantograph equation and modulated by a continuous-time finite-state Markov chain. By virtue of the classical variational approach, the duality method, and convex analysis, we obtain a stochastic maximum principle for the optimal control.

18.
Summary. The countable state space of a Markov chain whose stationary transition probabilities satisfy the continuity condition (1.5) is compactified to obtain a state space on which the corresponding processes can be made right continuous with left limits, and strongly Markovian. There is a form of quasi-left-continuity, modified by the possible presence of branch points. Excessive functions are investigated.

19.
A maximum out forest of a digraph is a spanning subgraph that consists of disjoint diverging trees and has the maximum possible number of arcs. For an arbitrary weighted digraph, we consider a matrix of specific weights of maximum out forests and demonstrate how this matrix can be used to obtain a graph-theoretic interpretation of the limiting probabilities of Markov chains. For a special (nonclassical) correspondence between Markov chains and weighted digraphs, the matrix of Cesàro limiting transition probabilities of any finite homogeneous Markov chain coincides with the normalized matrix of maximum out forests of the corresponding digraphs. This provides a finite (combinatorial) method to calculate the limiting probabilities of Markov chains and thus their stationary distributions. On the other hand, the Markov chain technique provides proofs of some statements about digraphs.
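For a two-state chain, the forest interpretation can be checked directly in its simplest (spanning-tree) form: the stationary probability of each state is proportional to the weight of the spanning in-tree rooted at it. A small sketch with illustrative numbers (the paper's correspondence and the general out-forest matrix are richer than this):

```python
import numpy as np

# Two-state chain with transition probabilities p12 and p21.
p12, p21 = 0.3, 0.1
P = np.array([[1 - p12, p12],
              [p21, 1 - p21]])

# Tree rooted at state 1 uses the single arc 2->1 (weight p21);
# tree rooted at state 2 uses the arc 1->2 (weight p12).
pi_tree = np.array([p21, p12]) / (p12 + p21)

# Compare with the stationary distribution from the eigenproblem.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
assert np.allclose(pi, pi_tree)
print(pi_tree)  # [0.25, 0.75]
```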

20.
莫晓云. 《经济数学》, 2010, 27(3): 28-34
Building on a Markov chain model of customer relationship development, we construct a stochastic process for the returns that customers bring to a firm. We prove that, under suitable assumptions, the customer return process is a Markov chain, and in fact a time-homogeneous one. We derive the transition probabilities of this chain and, from them, several formulas for the expected return a customer brings to the firm, thereby providing an effective quantitative basis for the firm's choice of customer-relationship strategies.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号