Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities from a finite sampling of a Markov chain can lead to truncation errors: even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and lose important properties, such as the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
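A minimal sketch of the convex formulation (not the paper's implementation): find the chain closest to P in Frobenius norm among matrices that are row-stochastic and satisfy detailed balance with respect to a fixed, assumed distribution mu. The cvxpy modeling library is used here, and all names and data are illustrative.

```python
import cvxpy as cp
import numpy as np

def closest_reversible(P, mu):
    """Nearest reversible chain to P, assuming a fixed distribution mu."""
    n = P.shape[0]
    Q = cp.Variable((n, n), nonneg=True)
    constraints = [cp.sum(Q, axis=1) == 1]          # rows sum to one
    for i in range(n):                              # detailed balance:
        for j in range(i + 1, n):                   # mu_i Q_ij = mu_j Q_ji
            constraints.append(mu[i] * Q[i, j] == mu[j] * Q[j, i])
    cp.Problem(cp.Minimize(cp.norm(Q - P, "fro")), constraints).solve()
    return Q.value

P = np.array([[0.7, 0.3, 0.0], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
mu = np.array([0.3, 0.4, 0.3])                      # assumed target distribution
print(closest_reversible(P, mu))
```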

3.
Suppose $\{P^n(x, A)\}$ denotes the transition law of a general state space Markov chain $\{X_n\}$. We find conditions under which weak convergence of $\{X_n\}$ to a random variable $X$ with law $L$ (essentially defined by $\int P^n(x, dy)\, g(y) \to \int L(dy)\, g(y)$ for bounded continuous $g$) implies that $\{X_n\}$ tends to $X$ in total variation (in the sense that $\|P^n(x, \cdot) - L\| \to 0$), which then shows that $L$ is an invariant measure for $\{X_n\}$. The conditions we find involve some irreducibility assumptions on $\{X_n\}$ and some continuity conditions on the one-step transition law $\{P(x, A)\}$.
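A toy two-state illustration of the conclusion (the paper works on general state spaces; here everything is computable directly): the total variation distance $\|P^n(x, \cdot) - L\|$ shrinks as $n$ grows.

```python
import numpy as np

# Two-state ergodic chain; pi is its stationary law (pi @ P == pi).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])

row = np.array([1.0, 0.0])          # P^n(x, .) starting from state 0
for n in range(1, 6):
    row = row @ P
    tv = 0.5 * np.abs(row - pi).sum()
    print(f"n={n}  total variation = {tv:.4f}")
```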

4.
5.
Summary A continuous-parameter Markov process on a general state space has transition function $P_t(x, E)$. The theory of regenerative phenomena is applied to the question: which functions of $t$ can arise in this way? Particular attention is paid to processes of purely discontinuous type, to which known results for processes with a countable state space are extended.

6.
Summary We consider a homogeneous Markov chain with discrete time parameter and a finite set of states. In every state we suppose a finite set of possible controls. We seek the optimal policy that minimizes (respectively maximizes) the sum of certain stationary absolute probabilities, under the condition that the costs associated with the control stay within given bounds. It is shown that the optimal policy does not necessarily belong to the class of pure policies. This problem is applicable, for example, to controlling the quality of products through financial rewards for the workers.
Zusammenfassung We consider a homogeneous Markov chain with discrete time parameter and finite state space. The chain can be controlled: in each state finitely many actions are available, and a strategy (policy) is given by assigning an action to each state. Under suitable assumptions, the Markov chain associated with each strategy has a uniquely determined stationary distribution. The goal is to determine a strategy that minimizes (or maximizes) a sum of probabilities of given states with respect to the associated stationary distribution, subject to the constraint that the costs connected with the control do not exceed a given bound. The solution is obtained via linear programming. An example shows that the optimal strategy is in general a mixed strategy.
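A minimal sketch of the linear-programming solution in occupation-measure form (illustrative data and names, not the paper's example): variables x(s, a) carry the stationary probability of being in state s and choosing action a; the objective is the stationary mass on a target set, the cost bound is a linear constraint, and the recovered policy is in general mixed.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: S states, A actions, P[a][s, s'] transition probs,
# c[s, a] control costs, budget B, target set T whose stationary
# probability we minimize. All values here are assumptions for the sketch.
S, A = 3, 2
rng = np.random.default_rng(0)
P = [rng.dirichlet(np.ones(S), size=S) for _ in range(A)]
c = rng.uniform(0, 1, size=(S, A))
B, T = 0.8, [2]

idx = lambda s, a: s * A + a                 # flatten x[s, a] row-major
obj = np.zeros(S * A)
for s in T:
    for a in range(A):
        obj[idx(s, a)] = 1.0                 # stationary prob. of target states

A_eq = np.zeros((S + 1, S * A))
b_eq = np.zeros(S + 1)
for s in range(S):                           # flow balance: mass out = mass in
    for a in range(A):
        A_eq[s, idx(s, a)] += 1.0
        for s2 in range(S):
            A_eq[s, idx(s2, a)] -= P[a][s2, s]
A_eq[S, :] = 1.0                             # occupation measure sums to 1
b_eq[S] = 1.0

# Expected cost <= B; an infeasible budget shows up as res.status != 0.
res = linprog(obj, A_ub=c.reshape(1, -1), b_ub=[B],
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(S, A)
policy = x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
print(policy)                                # generally a mixed (randomized) policy
```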

7.
Much work has focused on developing exact tests for the analysis of discrete data using log linear or logistic regression models. A parametric model is tested for a dataset by conditioning on the value of a sufficient statistic and determining the probability of obtaining another dataset as extreme or more extreme relative to the general model, where extremeness is determined by the value of a test statistic such as the chi-square or the log-likelihood ratio. Exact determination of these probabilities can be infeasible for high dimensional problems, and asymptotic approximations to them are often inaccurate when there are small data entries and/or there are many nuisance parameters. In these cases Monte Carlo methods can be used to estimate exact probabilities by randomly generating datasets (tables) that match the sufficient statistic of the original table. However, naive Monte Carlo methods produce tables that are usually far from matching the sufficient statistic. The Markov chain Monte Carlo method used in this work (the regression/attraction approach) uses attraction to concentrate the distribution around the set of tables that match the sufficient statistic, and uses regression to take advantage of information in tables that “almost” match. It is also more general than others in that it does not require the sufficient statistic to be linear, and it can be adapted to problems involving continuous variables. The method is applied to several high dimensional settings including four-way tables with a model of no four-way interaction, and a table of continuous data based on beta distributions. It is powerful enough to deal with the difficult problem of four-way tables and flexible enough to handle continuous data with a nonlinear sufficient statistic.
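For contrast with the regression/attraction approach, the simplest Markov chain on two-way tables with fixed margins is the classic ±1 "rectangle move" (not the authors' method); it shows what "randomly generating tables that match the sufficient statistic" means in the linear case. Under this symmetric proposal with rejection, the walk is uniform over the tables with the given margins.

```python
import numpy as np

# Pick two rows and two columns, add +1/-1 on the 2x2 corners; row and
# column sums (the sufficient statistic of the independence model) are
# unchanged by construction. Reject moves that create negative cells.
def rectangle_step(table, rng):
    r = rng.choice(table.shape[0], size=2, replace=False)
    c = rng.choice(table.shape[1], size=2, replace=False)
    move = np.zeros_like(table)
    move[r[0], c[0]] = move[r[1], c[1]] = 1
    move[r[0], c[1]] = move[r[1], c[0]] = -1
    if (table + move >= 0).all():
        return table + move
    return table

rng = np.random.default_rng(1)
t = np.array([[3, 1], [2, 4]])
for _ in range(1000):
    t = rectangle_step(t, rng)
print(t, t.sum(axis=0), t.sum(axis=1))   # margins are preserved
```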

8.
9.
10.
This paper discusses an efficient method to compute mean passage times and absorption probabilities in Markov and Semi-Markov models. It uses the state reduction approach introduced by Winfried Grassmann for the computation of the stationary distribution of a Markov model. The method is numerically stable and has a simple probabilistic interpretation. It is especially stressed that the natural framework for the state reduction method is Semi-Markov theory rather than Markov theory.
Zusammenfassung An efficient computational method is presented for determining mean times to absorption and absorption probabilities in Markov and Semi-Markov models. The method rests on the state reduction approach introduced by Grassmann for the computation of stationary distributions of Markov chains. The procedure is numerically stable and has a simple probability-theoretic interpretation. It is emphasized that the natural framework of the method is Semi-Markov theory rather than Markov theory.
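Grassmann's state-reduction recursion, which the method above builds on, can be sketched in its original role of computing a stationary distribution: it censors one state at a time and uses no subtractions, which is the source of the numerical stability (a textbook GTH sketch, not the paper's Semi-Markov extension).

```python
import numpy as np

def gth_stationary(P):
    """Stationary distribution of an irreducible stochastic matrix P via
    Grassmann/GTH state reduction (subtraction-free, numerically stable)."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    for k in range(n - 1, 0, -1):          # censor states n-1, ..., 1
        s = P[k, :k].sum()                 # probability of leaving k "downward"
        P[:k, :k] += np.outer(P[:k, k], P[k, :k]) / s
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):                  # back-substitution
        pi[k] = (pi[:k] @ P[:k, k]) / P[k, :k].sum()
    return pi / pi.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
print(gth_stationary(P))                   # matches the left eigenvector of P
```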

11.
We consider a Markov chain in which the states are fuzzy subsets defined on some finite state space. Building on the relationship between set-valued Markov chains and the Dempster-Shafer combination rule, we construct a procedure for finding transition probabilities from one fuzzy state to another. This construction involves Dempster-Shafer type mass functions having fuzzy focal elements, together with a measure of the degree to which two fuzzy sets are equal. We also show how to find approximate transition probabilities from a fuzzy state to a crisp state in the original state space.
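As a toy illustration only (the paper's construction goes through Dempster-Shafer mass functions with fuzzy focal elements, which is not reproduced here), one crude way to approximate a transition probability from a fuzzy state A to a crisp state j is a membership-weighted average of the crisp transition probabilities. All names are illustrative.

```python
import numpy as np

# Crude membership-weighted approximation, NOT the authors' construction:
# P(A -> j) as the A-weighted average of the rows of the crisp matrix P.
def fuzzy_to_crisp(P, membership, j):
    w = np.asarray(membership, dtype=float)   # membership degrees of A
    return (w @ P[:, j]) / w.sum()

P = np.array([[0.6, 0.4], [0.2, 0.8]])
print(fuzzy_to_crisp(P, membership=[1.0, 0.3], j=0))
```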

12.
In this paper we study the flux through a finite Markov chain of a quantity, which we call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for modeling open systems whose dynamics have the Markov property.
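A minimal simulation sketch of this setup (all names and values illustrative, not the paper's model): at each step the current mass flows through the transition matrix P and the external source s adds fresh mass; absorbing states retain whatever reaches them.

```python
import numpy as np

def mass_flux(P, s, steps):
    """Mass vector after `steps` rounds of flow through P plus supply s."""
    m = np.zeros_like(s, dtype=float)
    for _ in range(steps):
        m = m @ P + s                 # one step of flow, then fresh supply
    return m

P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])      # state 2 is absorbing
s = np.array([1.0, 0.0, 0.0])        # unit mass supplied to state 0 each step
print(mass_flux(P, s, 100))          # transient mass settles; absorbed mass grows
```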

13.
14.
15.
16.
A batch Markov arrival process (BMAP) $X^* = (N, J)$ is a two-dimensional Markov process with two components: the counting process $N$ and the phase process $J$. It is proved that the phase process is a time-homogeneous Markov chain with a finite state space (for short, a Markov chain). In this paper we first pose a new, inverse problem: given a Markov chain $J$, can we construct a process $N$ such that the two-dimensional process $X^* = (N, J)$ is a BMAP? Such a process $X^* = (N, J)$ is said to be an adjoining BMAP for the Markov chain $J$. For a given Markov chain, adjoining processes exist and are not unique. Two kinds of adjoining BMAPs are constructed: BMAPs with fixed constant batches, and BMAPs with independent and identically distributed (i.i.d.) random batches. The method used in this paper is not the usual matrix-analytic method for studying BMAPs but a path-analytic one: we directly construct sample paths of the adjoining BMAPs. Expressions for the characteristic matrices $(D_k, k = 0, 1, 2, \dots)$ and the transition probabilities of the adjoining BMAP are obtained from the density matrix $Q$ of the given Markov chain $J$, and two main theorems are established. These expressions are presented here for the first time.
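To make the path-analytic construction concrete, here is a sketch of the first kind of adjoining BMAP (fixed constant batches): simulate a path of the continuous-time chain J from its density matrix Q and let N jump by a fixed batch b at every phase transition. This is only one admissible construction under stated assumptions (conservative Q, no absorbing phases), and all names are illustrative.

```python
import numpy as np

def adjoining_bmap_path(Q, b, t_end, seed=0):
    """Sample path (t, N, J) of a BMAP adjoined to the chain with density
    matrix Q, where every phase jump carries a fixed batch of size b."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    j, t, N = 0, 0.0, 0
    path = [(t, N, j)]
    while True:
        rate = -Q[j, j]                    # exponential holding rate in phase j
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return path
        p = np.clip(Q[j], 0, None) / rate  # jump distribution over other phases
        j = int(rng.choice(n, p=p))
        N += b                             # fixed constant batch at each jump
        path.append((t, N, j))

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
print(adjoining_bmap_path(Q, b=3, t_end=5.0))
```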

17.
18.
We consider the smoothing probabilities of a hidden Markov model (HMM). We show that, under fairly general conditions on the HMM, exponential forgetting still holds, and the smoothing probabilities can be well approximated by those of the double-sided HMM. This makes it possible to use ergodic theorems. As an application, we consider pointwise maximum a posteriori segmentation and show that the corresponding risks converge.
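For reference, the smoothing probabilities discussed above are the quantities gamma_t(i) = P(X_t = i | all observations) of the standard forward-backward recursion; a scaled textbook sketch follows (the paper's double-sided approximation is not reproduced here).

```python
import numpy as np

def smoothing_probs(A, pi, B):
    """Forward-backward smoothing for a finite HMM. A: transition matrix,
    pi: initial distribution, B[t, i] = p(observation_t | state i)."""
    T, n = B.shape
    alpha = np.zeros((T, n))
    beta = np.ones((T, n))
    alpha[0] = pi * B[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        alpha[t] /= alpha[t].sum()          # rescale to avoid underflow
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```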

19.
We present several alternative computation schemes, accompanied by appropriate software, to compute the probabilities of the (2n+1) possible match levels between the alleles at n genetic sites of a given individual and the alleles at the same n sites of an individual drawn randomly from a given population. The model generalizes the asymmetric HLA criterion that defines donor-recipient immunological compatibility in kidney or bone-marrow transplantation. We discuss our algorithms in order of their run-time complexity with respect to n, and show the advantage of using computational schemes over explicit expressions even for the present-day HLA count of n = 3.
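If the n sites are assumed independent, the distribution over the 2n+1 match levels is the convolution of the per-site distributions, which is one simple computational scheme of the kind compared in the paper (an assumption-laden sketch, not the authors' exact algorithm; the per-site probabilities are illustrative).

```python
import numpy as np

# Each site contributes 0, 1, or 2 matched alleles with known per-site
# probabilities; the total match level over n sites (0..2n) is then the
# convolution of the per-site distributions.
def match_level_distribution(site_probs):
    dist = np.array([1.0])
    for p in site_probs:                  # p = [P(0), P(1), P(2)] for one site
        dist = np.convolve(dist, p)
    return dist                           # length 2n + 1

sites = [np.array([0.25, 0.50, 0.25])] * 3   # illustrative n = 3 (HLA-like)
print(match_level_distribution(sites))       # 7 match-level probabilities
```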

20.