Similar Literature
20 similar documents found.
1.
A new approach to constructing generalised probabilities is proposed. It is based on models using lower and upper previsions or, equivalently, convex sets of probability measures. Our approach uses sets of Markov operators in the role of rules that preserve the desirability of gambles; the main motivation is the class of conditional-expectation operators, which are usually assumed to reduce the riskiness of gambles. Imprecise probability models are then obtained so as to be consistent with those desirability-preserving rules, with consistency criteria based on the existing interpretations of imprecise-probability models. The classical models based on lower and upper previsions are shown to be a special class of the generalised models. Further, we generalise some standard extension procedures, including the marginal extension and independent products, which can here be defined independently of the procedures known for standard models.

2.
The aim of this paper is to examine multiple Markov dependence in both the discrete- and the continuous-parameter case. In both cases the Markov property with arbitrary parameter values is investigated, and it is shown that this forces multiple Markov dependence to degenerate into simple Markov dependence.

3.
In this paper, potential theory is developed for finitely additive Markov chains. It is then used to obtain various characterization theorems for discrete-time Markov chains with an arbitrary state space, finitely additive stationary transition probabilities, and a finitely additive initial distribution.

4.
Handling uncertainty with interval probabilities has recently received considerable attention from researchers. Interval probabilities are used when partially known information makes it difficult to characterize uncertainty by point-valued probabilities. Most research on interval probabilities, covering combination, marginalization, conditioning, Bayesian inference and decision making, assumes that the interval probabilities are already known. How to elicit interval probabilities from subjective judgment is a basic and important problem for applications of interval probability theory, and it has remained a computational challenge. In this work, models for estimating and combining interval probabilities are formulated as linear and quadratic programming problems, which can be solved easily. The concepts of interval probabilities, interval entropy, interval expectation, interval variance, interval moments, and decision criteria with interval probabilities are addressed. A numerical newsvendor example illustrates the approach; the results show that the proposed methods provide a novel and effective alternative for decision making when point-valued subjective probabilities are inapplicable because information is only partially known.
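To make the decision criterion concrete, here is a minimal sketch (not the paper's elicitation or combination models) of how lower and upper expectations under given interval probabilities reduce to two small linear programs; the payoff values and probability bounds are hypothetical.

```python
# Lower/upper expectation of a payoff under interval probabilities [l_i, u_i],
# solved as two linear programs over the set {l <= p <= u, sum(p) = 1}.
import numpy as np
from scipy.optimize import linprog

def interval_expectation(payoff, lower, upper):
    """Return (lower, upper) expectation of `payoff` over all probability
    vectors p with lower <= p <= upper and sum(p) == 1."""
    payoff = np.asarray(payoff, dtype=float)
    n = payoff.size
    bounds = list(zip(lower, upper))                 # l_i <= p_i <= u_i
    A_eq, b_eq = np.ones((1, n)), np.array([1.0])    # probabilities sum to 1
    lo = linprog(payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # minimise E[payoff]
    hi = linprog(-payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds)  # maximise E[payoff]
    return lo.fun, -hi.fun

# Hypothetical numbers: three demand levels with partially known probabilities.
profit = [10.0, 25.0, 40.0]
print(interval_expectation(profit, lower=[0.2, 0.3, 0.1], upper=[0.5, 0.6, 0.4]))
```

The interval [lower expectation, upper expectation] returned here is what interval-probability decision criteria (e.g. Γ-maximin) act on.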

5.
6.
We consider the problem of giving explicit spectral bounds for time-inhomogeneous Markov chains on a finite state space. We give bounds that apply when there exists a probability π such that each of the different steps corresponds to a nice ergodic Markov kernel with stationary measure π. For instance, our results provide sharp bounds for models such as semi-random transpositions and semi-random insertions (in these cases π is the uniform probability on the symmetric group).

7.
We suggest an approach to obtaining general estimates of stability in terms of special "weighted" norms related to total variation. Two important classes of continuous-time Markov chains are considered for which it is possible to obtain exact convergence rate estimates (and hence guarantee exact stability estimates): birth–death–catastrophe processes, and queueing models with batch arrivals and group services.

8.
A reduced system is a smaller system derived in the process of analyzing a larger one. When solving for the steady-state probabilities of a Markov chain, the solution can generally be found by first solving a reduced system of equations obtained by appropriately partitioning the transition probability matrix. In this paper we categorize reduced systems as standard and nonstandard, and explore the existence of reduced systems and their properties relative to the original system. We also discuss first-passage probabilities and means for the standard reduced system relative to the original system. These properties are illustrated by determining the steady-state probabilities and first-passage time characteristics of a queueing system.
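As an illustration of the reduced-system idea, the sketch below solves for steady-state probabilities through one classical reduction, the stochastic complement of a partitioned transition matrix; it is not the paper's standard/nonstandard categorization, and the example matrix is made up.

```python
# Partition P into blocks over state sets A and B, solve the smaller
# "stochastic complement" S = P_AA + P_AB (I - P_BB)^{-1} P_BA for pi_A,
# then recover pi_B and normalise.
import numpy as np

def steady_state_via_reduction(P, A):
    """Stationary distribution of an irreducible chain P using the reduced
    system on the index set A (B is the complement)."""
    n = P.shape[0]
    B = [i for i in range(n) if i not in A]
    PAA, PAB = P[np.ix_(A, A)], P[np.ix_(A, B)]
    PBA, PBB = P[np.ix_(B, A)], P[np.ix_(B, B)]
    S = PAA + PAB @ np.linalg.solve(np.eye(len(B)) - PBB, PBA)   # reduced system
    vals, vecs = np.linalg.eig(S.T)                              # left eigenvector of S
    piA = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    piA = piA / piA.sum()
    piB = piA @ PAB @ np.linalg.inv(np.eye(len(B)) - PBB)
    pi = np.zeros(n)
    pi[A], pi[B] = piA, piB
    return pi / pi.sum()

P = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.1, 0.4, 0.5]])
print(steady_state_via_reduction(P, A=[0, 1]))
```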

9.
This paper discusses an efficient method to compute mean passage times and absorption probabilities in Markov and semi-Markov models. It uses the state-reduction approach introduced by Winfried Grassmann for computing the stationary distribution of a Markov model. The method is numerically stable and has a simple probabilistic interpretation. It is especially stressed that the natural framework for the state-reduction method is semi-Markov theory rather than Markov theory.
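For reference, here is a minimal sketch of the Grassmann-style state-reduction (GTH) recursion for the stationary distribution of a finite Markov chain, the building block that the paper adapts to mean passage times and absorption probabilities; the transition matrix is hypothetical.

```python
# GTH state reduction: eliminate states one by one (no subtractions, hence
# numerically stable), then rebuild the stationary vector by back-substitution.
import numpy as np

def gth_stationary(P):
    """Stationary distribution of a finite irreducible chain via GTH reduction."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    # Forward pass: eliminate states n-1, ..., 1.
    for k in range(n - 1, 0, -1):
        s = P[k, :k].sum()                          # mass leaving k towards lower states
        P[:k, k] /= s                               # chance of reaching k before lower states
        P[:k, :k] += np.outer(P[:k, k], P[k, :k])   # redistribute transitions through k
    # Backward pass: unnormalised stationary vector.
    x = np.zeros(n)
    x[0] = 1.0
    for k in range(1, n):
        x[k] = x[:k] @ P[:k, k]
    return x / x.sum()

P = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.1, 0.4, 0.5]]
print(gth_stationary(P))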

10.
In this paper, subgeometric ergodicity is investigated for continuous-time Markov chains. Several equivalent conditions, based on the first hitting time or the drift function, are derived as the main theorem. In its corollaries, practical drift criteria are given for ?-ergodicity and computable bounds on subgeometric convergence rates are obtained for stochastically monotone Markov chains. These results are illustrated by examples.

11.
We justify and discuss expressions for joint lower and upper expectations in imprecise probability trees, in terms of the sub- and supermartingales that can be associated with such trees. These imprecise probability trees can be seen as discrete-time stochastic processes with finite state sets and transition probabilities that are imprecise, in the sense that they are only known to belong to some convex closed set of probability measures. We derive various properties for their joint lower and upper expectations, and in particular a law of iterated expectations. We then focus on the special case of imprecise Markov chains, investigate their Markov and stationarity properties, and use these, by way of an example, to derive a system of non-linear equations for lower and upper expected transition and return times. Most importantly, we prove a game-theoretic version of the strong law of large numbers for submartingale differences in imprecise probability trees, and use this to derive point-wise ergodic theorems for imprecise Markov chains.
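A small sketch of the backward recursion suggested by the law of iterated (lower) expectations, for an imprecise Markov chain whose rows are only known up to probability intervals; the interval bounds are assumed values, and the greedy inner step is just one convenient way to minimise a linear function over such a row set.

```python
# Joint lower expectation of f(X_N) by repeatedly applying the lower
# transition operator (one application per time step, working backwards).
import numpy as np

def row_lower_expectation(f, low, upp):
    """min of p.f over {low <= p <= upp, sum(p) = 1}, solved greedily."""
    p = low.copy()
    budget = 1.0 - low.sum()                 # probability mass still to allocate
    for i in np.argsort(f):                  # put remaining mass on small f first
        add = min(upp[i] - low[i], budget)
        p[i] += add
        budget -= add
    return p @ f

def lower_expectation(f, low, upp, steps):
    """Lower expectation of f(X_N), N = steps, for each starting state."""
    g = np.asarray(f, dtype=float)
    for _ in range(steps):
        g = np.array([row_lower_expectation(g, low[i], upp[i]) for i in range(len(g))])
    return g

# Two-state example with interval transition probabilities (assumed values).
low = np.array([[0.6, 0.2], [0.3, 0.5]])
upp = np.array([[0.8, 0.4], [0.5, 0.7]])
print(lower_expectation(f=[1.0, 0.0], low=low, upp=upp, steps=5))
```

With steps=1 this recovers the row-wise lower transition probabilities; iterating it is the finite-horizon analogue of the iterated-expectations law discussed in the abstract.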

12.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain in which the transition probabilities governing each individual may depend on how the rest of the population is distributed across states), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with a unique equilibrium is stable if the chain satisfies a certain monotonicity property. The property generalizes to interactive Markov chains the standard definition of monotonicity for ordinary Markov chains.
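A toy illustration, with assumed dynamics, of the stability question for an interactive Markov chain: the transition matrix applied at each step depends on the current population distribution, and one asks whether the iteration settles at an equilibrium.

```python
# Iterate x_{t+1} = x_t P(x_t) for a hypothetical two-state interactive chain.
import numpy as np

def P_of(x):
    """Assumed interaction: the more mass in state 1, the more attractive it is."""
    a = 0.2 + 0.6 * x[1]          # probability of moving 0 -> 1
    b = 0.1 + 0.3 * x[0]          # probability of moving 1 -> 0
    return np.array([[1 - a, a], [b, 1 - b]])

x = np.array([0.9, 0.1])          # initial population distribution
for t in range(200):
    x_new = x @ P_of(x)
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print(t, x)                        # iteration count and (apparent) equilibrium
```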

13.
Estimation of spectral gap for Markov chains   (total citations: 7; self-citations: 0; citations by others: 7)
The study of the convergence rate (spectral gap) in the L²-sense is motivated by several different fields: probability, statistics, mathematical physics, computer science and so on, and it is now an active research topic. Based on a new approach (the coupling technique) introduced in [7] for estimating the convergence rate, and as a continuation of [4], [5], [7–9], [23] and [24], this paper studies rate estimates for time-continuous Markov chains. Two variational formulas for the rate are presented here for the first time for birth-death processes. For diffusions, similar results are presented in an accompanying paper [10]. The new formulas enable us to recover or improve the main known results. The connection between the sharp estimate and the corresponding eigenfunction is explored and illustrated by various examples. A previous result on optimal Markovian couplings [4] is also extended. Research supported in part by NSFC, the Qin Shi Sci & Tech. Foundation and the State Education Commission of China.
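As a numerical companion (not the paper's variational formulas), the spectral gap of a finite birth-death chain can be read off directly from the eigenvalues of its generator; the rates below are hypothetical.

```python
# Build the tridiagonal generator Q of a finite birth-death chain and take the
# smallest nonzero eigenvalue modulus of -Q as the spectral gap.
import numpy as np

def birth_death_gap(birth, death):
    """Spectral gap of the generator of a finite birth-death chain.
    birth[i]: rate i -> i+1, death[i]: rate i -> i-1 (death[0] is ignored)."""
    n = len(birth)
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = birth[i]
        if i > 0:
            Q[i, i - 1] = death[i]
        Q[i, i] = -Q[i].sum()
    eig = np.sort(np.abs(np.real(np.linalg.eigvals(Q))))
    return eig[1]                  # eig[0] is (numerically) zero

# Hypothetical rates: an M/M/1-type chain truncated at 20 states.
n = 20
print(birth_death_gap(birth=[1.0] * n, death=[0.0] + [2.0] * (n - 1)))
```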

14.
Dynkin's isomorphism theorem is an important tool for investigating problems formulated in terms of the local times of Markov processes. The theorem concerns continuous-time Markov processes; here we give an equivalent version for Markov chains.

15.
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P.

16.
This paper develops bounds on the rate of decay of powers of Markov kernels on finite state spaces. These are combined with eigenvalue estimates to give good bounds on the rate of convergence to stationarity for finite Markov chains whose underlying graph has moderate volume growth. Roughly, for such chains, order (diameter)² steps are necessary and sufficient to reach stationarity. We consider local Poincaré inequalities and use them to prove Nash inequalities. These are bounds on l²-norms in terms of Dirichlet forms and l¹-norms which yield decay rates for iterates of the kernel. This method is adapted from arguments developed by a number of authors in the context of partial differential equations and, later, in the study of random walks on infinite graphs. The main results do not require reversibility.
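A quick numerical illustration of the "(diameter)² steps" phenomenon quantified by such bounds: the total-variation distance to stationarity of a lazy random walk on a cycle, evaluated after multiples of n² steps (the cycle length and constants below are arbitrary choices).

```python
# Lazy random walk on a cycle of length n: TV distance to the uniform
# distribution after t = c * n^2 steps for several constants c.
import numpy as np

def lazy_cycle_kernel(n):
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5                          # laziness avoids periodicity
        P[i, (i - 1) % n] = 0.25
        P[i, (i + 1) % n] = 0.25
    return P

def tv_to_uniform(P, start, t):
    dist = np.zeros(P.shape[0]); dist[start] = 1.0
    dist = dist @ np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(dist - 1.0 / P.shape[0]).sum()

n = 30
P = lazy_cycle_kernel(n)
for c in (0.1, 0.5, 1.0, 2.0):
    t = int(c * n * n)
    print(f"t = {c:>4} * n^2: TV distance = {tv_to_uniform(P, 0, t):.4f}")
```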

17.
We present a framework for representing a queue at arrival epochs as a Harris recurrent Markov chain (HRMC). The input to the queue is a marked point process governed by an HRMC, and the queue dynamics are formulated by a general recursion. Such inputs include i.i.d., regenerative, Markov-modulated and Markov renewal processes, as well as the output from some queues. Since an HRMC is regenerative, the queue inherits the regenerative structure. As examples, we consider split & match, tandem, G/G/c and more general skip-forward networks. In the case of i.i.d. input, we show the existence of regeneration points for a Jackson-type open network with general service and interarrival time distributions. A revised version of the author's winning paper of the 1986 George E. Nicholson Prize (awarded by the Operations Research Society of America).

18.
Phylogenetic trees are commonly used to model the evolutionary relationships among a collection of biological species. Over the past fifteen years, the convergence properties of Markov chains defined on phylogenetic trees have been studied, yielding results about the time required for such chains to converge to their stationary distributions. In this work we derive an upper bound on the relaxation time of two Markov chains on rooted binary trees: one defined by nearest neighbor interchanges (NNI) and the other defined by subtree prune and regraft (SPR) moves.

19.
Let {X_n, n ≥ 0} and {Y_n, n ≥ 0} be two stochastic processes such that Y_n depends on X_n in a stationary manner, i.e. P(Y_n ∈ A | X_n) does not depend on n. Sufficient conditions are derived for Y_n to have a limiting distribution. If X_n is a Markov chain with stationary transition probabilities and Y_n = f(X_n, ..., X_{n+k}), then Y_n depends on X_n in a stationary way. Two situations are considered: (i) {X_n, n ≥ 0} has a limiting distribution; (ii) {X_n, n ≥ 0} does not have a limiting distribution and exits every finite set with probability 1. Several examples are considered, including that of a non-homogeneous Poisson process with periodic rate function, where we obtain the limiting distribution of the interevent times.

20.
Ruin probabilities for a discrete-time risk model with Markovian interest rates   (total citations: 4; self-citations: 0; citations by others: 4)
This paper considers a class of discrete-time risk models in which both the premiums and the claim amounts are random variables and the interest rate is governed by a Markov chain. Recursive equations for the finite-time and ultimate ruin probabilities are derived, and an upper-bound expression for the ultimate ruin probability is obtained by induction.
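A minimal Monte Carlo sketch of such a model (not the paper's recursive equations or upper bound): the surplus earns a Markov-modulated interest rate and collects a random premium and pays a random claim each period; all distributions and parameters below are assumptions for illustration.

```python
# Monte Carlo estimate of the finite-time ruin probability for a surplus
# process with a two-state Markov-chain interest rate.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.02, 0.06])                 # interest rate in each regime (assumed)
R = np.array([[0.9, 0.1], [0.2, 0.8]])         # regime transition matrix (assumed)

def ruin_probability(u0=10.0, horizon=50, n_paths=10_000):
    """Estimate P(surplus drops below 0 within `horizon` periods)."""
    ruined = 0
    for _ in range(n_paths):
        u, state = u0, 0
        for _ in range(horizon):
            state = rng.choice(2, p=R[state])  # next interest-rate regime
            premium = rng.uniform(0.8, 1.2)    # assumed premium income per period
            claim = rng.exponential(1.0)       # assumed claim size per period
            u = u * (1.0 + rates[state]) + premium - claim
            if u < 0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability())
```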
