Similar literature (20 documents found)
1.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non‐reversible and will lose important properties, like the real‐valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
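As a rough illustration of the kind of convex program involved (a sketch only, assuming the target stationary distribution pi is fixed in advance, which simplifies the paper's formulation; numpy and cvxpy assumed):

# Sketch: closest reversible transition matrix to P under a fixed stationary
# distribution pi (illustrative simplification, not the paper's exact program).
import numpy as np
import cvxpy as cp

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
pi = np.array([0.4, 0.3, 0.3])            # assumed target stationary distribution
n = P.shape[0]

X = cp.Variable((n, n), nonneg=True)      # candidate reversible transition matrix
F = np.diag(pi) @ X                       # probability flow pi_i * X_ij
constraints = [cp.sum(X, axis=1) == 1,    # rows are probability vectors
               F == F.T]                  # detailed balance (reversibility)
cp.Problem(cp.Minimize(cp.norm(X - P, "fro")), constraints).solve()
print(np.round(X.value, 3))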

2.
In this paper, we develop an algorithmic method for the evaluation of the steady state probability vector of a special class of finite state Markov chains. For the class of Markov chains considered here, it is assumed that the matrix associated with the set of linear equations for the steady state probabilities possesses a special structure, such that it can be rearranged and decomposed as a sum of two matrices, one lower triangular with nonzero diagonal elements, and the other an upper triangular matrix with only very few nonzero columns. Almost all Markov chain models of queueing systems with finite source and/or finite capacity and first-come-first-served or head of the line nonpreemptive priority service discipline belong to this special class.
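For orientation, the generic dense computation that such structured algorithms improve upon can be sketched as follows (a baseline sketch only, not the decomposition method of the paper; numpy assumed):

# Baseline steady-state solve for a finite Markov chain: pi P = pi, sum(pi) = 1.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])  # replace last balance equation by normalization
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                                            # steady-state probability vector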

3.
We introduce the concept of weakly distance-regular digraph and study some of its basic properties. In particular, the (standard) distance-regular digraphs, introduced by Damerell, turn out to be those weakly distance-regular digraphs which have a normal adjacency matrix. As happens in the case of distance-regular graphs, the study is greatly facilitated by a family of orthogonal polynomials called the distance polynomials. For instance, these polynomials are used to derive the spectrum of a weakly distance-regular digraph. Some examples of these digraphs, such as the butterfly and the cycle prefix digraph which are interesting for their applications, are analyzed in the light of the developed theory. Also, some new constructions involving the line digraph and other techniques are presented.
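As a small illustration of the role played by normality (an invented example, not taken from the paper): the adjacency matrix of a directed cycle commutes with its transpose, so it falls on the Damerell (distance-regular) side of the dichotomy, and its spectrum can be read off directly:

# Illustration: the adjacency matrix of a directed n-cycle is normal (A A^T = A^T A).
import numpy as np

n = 6
A = np.roll(np.eye(n), 1, axis=1)          # directed n-cycle: arc i -> i+1 (mod n)
print(np.allclose(A @ A.T, A.T @ A))       # True: A is normal
print(np.round(np.linalg.eigvals(A), 3))   # the n-th roots of unity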

4.
We prove that every finite regular digraph has an arc-transitive covering digraph (whose arcs are equivalent under automorphisms) and every finite regular graph has a 2-arc-transitive covering graph. As a corollary, we sharpen C. D. Godsil's results on eigenvalues and minimum polynomials of vertex-transitive graphs and digraphs. Using Godsil's results, we prove that, given an integral matrix A, there exists an arc-transitive digraph X such that the minimum polynomial of A divides that of X. It follows that there exist arc-transitive digraphs with nondiagonalizable adjacency matrices, answering a problem by P. J. Cameron. For symmetric matrices A, we construct 2-arc-transitive graphs X.

5.
A word function is a function from the set of all words over a finite alphabet into the set of real numbers. In particular, when the blocks of a partition over the state set of a Markov chain are taken as the letters of the finite alphabet, and the function represents the probabilities that the chain will visit sequences of such blocks consecutively, then the function is a function of a Markov chain. It is known that a word function is of “finite rank” (the rank of a function is defined in the text) if and only if it is a function of a pseudo Markov chain (“pseudo” means here that the initial vector and the matrix representing the chain may have positive, negative, or zero values and are not necessarily stochastic). The aim of this note is to show that any function of a pseudo Markov chain can be represented as the difference of two functions of true Markov chains multiplied by a factor which grows exponentially with the length of the arguments (considered as words over a finite alphabet).
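One common way such word functions are evaluated (a sketch under the assumption that the blocks form a partition of the state set; the three-state chain and the blocks below are invented) multiplies the initial vector by block-indicator and transition matrices:

# Sketch: value of the word function = probability of visiting blocks consecutively.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])
pi = np.array([0.35, 0.40, 0.25])                 # assumed initial distribution
blocks = {'a': np.diag([1.0, 1.0, 0.0]),          # indicator matrix of block 'a' = {0, 1}
          'b': np.diag([0.0, 0.0, 1.0])}          # indicator matrix of block 'b' = {2}

def word_prob(word):
    v = pi @ blocks[word[0]]
    for letter in word[1:]:
        v = v @ P @ blocks[letter]
    return v.sum()

print(word_prob("aab"))   # probability of observing blocks a, a, b consecutively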

6.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non‐absorbing. In the former case the chain is absorbing, in the latter it is regular.
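A minimal sketch of the non-absorbing case (b), with two locations, duration capped at D = 3, and an invented cumulative-inertia rule in which the staying probability grows with duration:

# Duration-specific chain on states (location, duration), duration capped at D.
import numpy as np

D = 3
locs = 2
stay = lambda d: 0.5 + 0.1 * d                               # assumed duration-dependent staying probability
states = [(l, d) for l in range(locs) for d in range(1, D + 1)]
idx = {s: i for i, s in enumerate(states)}
n = len(states)

P = np.zeros((n, n))
for (l, d) in states:
    p_stay = stay(d)
    P[idx[(l, d)], idx[(l, min(d + 1, D))]] += p_stay        # stay: duration grows, capped at D
    P[idx[(l, d)], idx[(1 - l, 1)]] += 1.0 - p_stay          # leave: other location, duration resets to 1

w, v = np.linalg.eig(P.T)                                    # equilibrium distribution of the finite chain
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
print(dict(zip(states, np.round(pi, 3))))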

7.
8.
Bollobás and Scott proved that if the weighted outdegree of every vertex of an edge-weighted digraph is at least 1, then the digraph contains a (directed) path of weight at least 1. In this note we characterize the extremal weighted digraphs with no heavy paths. Our result extends a corresponding theorem of Bondy and Fan on weighted graphs. We also give examples to show that a result of Bondy and Fan on the existence of heavy paths connecting two given vertices in a 2-connected weighted graph does not extend to 2-connected weighted digraphs.

9.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
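A small sketch of the object under study, with an invented two-state interaction in which the pull toward state 1 grows with the share of the population already there (monotone in the informal sense used above):

# Interactive Markov chain: the transition matrix depends on the current population distribution p.
import numpy as np

def transition(p):
    a = 0.2 + 0.6 * p[1]          # attraction toward state 1 grows with its current share (invented rule)
    return np.array([[1 - a, a],
                     [0.1, 0.9]])

p = np.array([0.9, 0.1])          # initial population distribution
for _ in range(200):
    p = p @ transition(p)
print(np.round(p, 4))             # apparent equilibrium of the interactive chain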

10.
We study the limit behaviour of a nonlinear differential equation whose solution is a superadditive generalisation of a stochastic matrix, prove convergence, and provide necessary and sufficient conditions for ergodicity. In the linear case, the solution of our differential equation is equal to the matrix exponential of an intensity matrix and can then be interpreted as the transition operator of a homogeneous continuous-time Markov chain. Similarly, in the generalised nonlinear case that we consider, the solution can be interpreted as the lower transition operator of a specific set of non-homogeneous continuous-time Markov chains, called an imprecise continuous-time Markov chain. In this context, our convergence result shows that for a fixed initial state, an imprecise continuous-time Markov chain always converges to a limiting distribution, and our ergodicity result provides a necessary and sufficient condition for this limiting distribution to be independent of the initial state.
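In the linear special case mentioned above, the solution is the matrix exponential of an intensity matrix, and convergence to a limiting distribution can be observed numerically (illustrative rate matrix; numpy and scipy assumed):

# Linear case: expm(Q t) is the transition operator; for an ergodic chain its rows converge.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.4,  0.4,  0.0],
              [ 0.3, -0.5,  0.2],
              [ 0.1,  0.1, -0.2]])          # intensity matrix: rows sum to zero
for t in (1.0, 10.0, 100.0):
    print(t, np.round(expm(Q * t)[0], 4))   # first row approaches the limiting distribution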

11.
Markov chains are often used as mathematical models of natural phenomena, with transition probabilities defined in terms of parameters that are of interest in the scientific question at hand. Sensitivity analysis is an important way to quantify the effects of changes in these parameters on the behavior of the chain. Many properties of Markov chains can be written as simple matrix expressions, and hence matrix calculus is a powerful approach to sensitivity analysis. Using matrix calculus, we derive the sensitivity and elasticity of a variety of properties of absorbing and ergodic finite-state chains. For absorbing chains, we present the sensitivities of the moments of the number of visits to each transient state, the moments of the time to absorption, the mean number of states visited before absorption, the quasistationary distribution, and the probabilities of absorption in each of several absorbing states. For ergodic chains, we present the sensitivity of the stationary distribution, the mean first passage time matrix, the fundamental matrix, and the Kemeny constant. We include two examples of application of the results to demographic and ecological problems.
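A sketch of one standard matrix-calculus expression for the sensitivity of the stationary distribution, d(pi)/d(theta) = pi (dP/d(theta)) Z with fundamental matrix Z = (I - P + 1 pi)^{-1}; the parametrised chain below is invented for illustration, and the result is checked against a finite difference:

# Sensitivity of the stationary distribution of an ergodic chain to a parameter theta.
import numpy as np

def P_of(theta):
    return np.array([[1 - theta, theta, 0.0],
                     [0.2, 0.5, 0.3],
                     [0.1, 0.4, 0.5]])

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

theta, h = 0.3, 1e-6
P = P_of(theta)
pi = stationary(P)
dP = (P_of(theta + h) - P_of(theta - h)) / (2 * h)            # dP/dtheta (numerical here)
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))   # fundamental matrix
print(np.round(pi @ dP @ Z, 5))                               # matrix-calculus sensitivity
print(np.round((stationary(P_of(theta + h)) - stationary(P_of(theta - h))) / (2 * h), 5))  # check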

12.
This paper is devoted to perturbation analysis of denumerable Markov chains. Bounds are provided for the deviation between the stationary distribution of the perturbed and nominal chain, where the bounds are given by the weighted supremum norm. In addition, bounds for the perturbed stationary probabilities are established. Furthermore, bounds on the norm of the asymptotic decomposition of the perturbed stationary distribution are provided, where the bounds are expressed in terms of the norm of the ergodicity coefficient, or the norm of a special residual matrix. Refinements of our bounds for Doeblin Markov chains are considered as well. Our results are illustrated with a number of examples.
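For finite chains there is a classical bound in this spirit relating the stationary distributions of the nominal and perturbed chains through Dobrushin's ergodicity coefficient; the sketch below (invented matrices, finite state space, unweighted norms, so only an analogue of the setting above) checks the bound ||pi' - pi||_1 <= ||P' - P||_inf / (1 - tau(P)):

# Finite-state analogue of the perturbation bounds: Dobrushin coefficient tau(P).
import numpy as np

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def tau(P):                                            # Dobrushin ergodicity coefficient
    n = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum() for i in range(n) for j in range(n))

P  = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.3, 0.3, 0.4]])
Pp = P + np.array([[ 0.02, -0.01, -0.01],
                   [-0.01,  0.02, -0.01],
                   [ 0.00, -0.01,  0.01]])             # small perturbation, rows still sum to 1
lhs = np.abs(stationary(Pp) - stationary(P)).sum()
rhs = np.abs(Pp - P).sum(axis=1).max() / (1 - tau(P))
print(lhs, "<=", rhs)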

13.
A model is established to describe the structures of tilled soils using Markov chain theory. The effectiveness of the model in describing soil structures, and its accuracy when the model parameters are determined from limited field data is investigated by a consideration of variances of the transition probabilities and Markov chain state occurrences in finite length chains. Criteria for correlation of soil structures at small horizontal and vertical displacements are derived, in order to establish distances at which soil structures become effectively independent. In this, a mathematical analysis is made of limiting covariances, generally applicable to the type of Markov chain used in describing these structures, in order to drastically reduce computing time in processing field data. Similarity coefficients are defined from the theory to measure similarity in different soil structures, and are compared in practice.
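A sketch of the kind of estimation step involved, fitting transition probabilities to an observed sequence of structural states (the sequence is invented; the paper's variance analysis is not reproduced here):

# Maximum-likelihood estimate of a transition matrix from an observed state sequence.
import numpy as np

sequence = [0, 0, 1, 2, 1, 1, 0, 2, 2, 1, 0, 0, 1, 1, 2, 0]   # hypothetical field data
n = max(sequence) + 1
counts = np.zeros((n, n))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)            # row-normalised estimate
print(np.round(P_hat, 2))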

14.
Decision-making in an environment of uncertainty and imprecision for real-world problems is a complex task. In this paper, general finite state fuzzy Markov chains that have a finite convergence to a stationary (possibly periodic) solution are introduced. The Cesaro average and the -potential for fuzzy Markov chains are defined, and it is shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for the classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.
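Assuming the usual max-min composition for fuzzy transition matrices (an assumption about the setting; the paper's exact definitions may differ), the finite and possibly periodic convergence of the powers can be observed on a small invented example:

# Powers of a fuzzy transition matrix under max-min composition stabilise in finitely many steps.
import numpy as np

def maxmin(A, B):                 # fuzzy (max-min) composition of two relations
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

P = np.array([[0.3, 0.9, 0.2],
              [0.8, 0.4, 0.6],
              [0.5, 0.7, 1.0]])
Q = P.copy()
for k in range(2, 8):
    Q = maxmin(Q, P)
    print(k, np.round(Q.flatten(), 2))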

15.
It is known that the main difficulty in applying the Markovian analogue of Wald's Identity is the presence, in the Identity, of the last state variable before the random walk is terminated. In this paper we show that this difficulty can be overcome if the underlying Markov chain has a finite state space. The absorption probabilities thus obtained are used, by employing a duality argument, to derive time-dependent and limiting probabilities for the depletion process of a dam with Markovian inputs. The second problem that is considered here is that of a non-homogeneous but cyclic Markov chain. An analogue of Wald's Identity is obtained for this case, and is used to derive time-dependent and limiting probabilities for the depletion process with inputs forming a non-homogeneous (cyclic) Markov chain.
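For orientation, the standard direct computation of absorption probabilities for a finite absorbing chain, B = (I - T)^{-1} R, which the paper instead obtains through the Markovian analogue of Wald's Identity (matrices invented for illustration):

# Absorption probabilities of a finite absorbing Markov chain.
import numpy as np

# transient states {0, 1, 2}; two absorbing states (e.g. "empty" and "full")
T = np.array([[0.0, 0.5, 0.0],
              [0.3, 0.0, 0.5],
              [0.0, 0.4, 0.0]])          # transient -> transient
R = np.array([[0.5, 0.0],
              [0.2, 0.0],
              [0.0, 0.6]])               # transient -> absorbing
B = np.linalg.solve(np.eye(3) - T, R)    # absorption probabilities from each transient state
print(np.round(B, 3))                    # each row sums to 1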

16.
There is a classical technique for determining the equilibrium probabilities of M/G/1 type Markov chains. After transforming the equilibrium balance equations of the chain, one obtains an equivalent system of equations in analytic functions to be solved. This method requires finding all singularities of a given matrix function in the unit disk and then using them to obtain a set of linear equations in the finite number of unknown boundary probabilities. The remaining probabilities and other measures of interest are then computed from the boundary probabilities. Under certain technical assumptions, the linear independence of the resulting equations is established by a direct argument involving only elementary results from matrix theory and complex analysis. Simple conditions for the ergodicity and nonergodicity of the chain are also given.

17.
We consider infinite order chains whose transition probabilities depend on a finite suffix of the past. These suffixes are of variable length and the set of lengths of all suffixes is unbounded. We assume that the probability transitions for each of these suffixes are continuous with exponential decay rate. For these chains, we prove the weak consistency of a modification of Rissanen's algorithm Context which estimates the length of the suffix needed to predict the next symbol, given a finite sample. This generalizes to the unbounded case the original result proved for variable length Markov chains in the seminal paper Rissanen (1983). Our basic tool is the canonical Markov approximation, which enables us to approximate the chain of infinite order by a sequence of variable length Markov chains of increasing order. Our proof is constructive and we present an explicit decreasing upper bound for the probability of wrong estimation of the length of the current suffix.

18.
In this paper circuit chains of superior order are defined as multiple Markov chains for which transition probabilities are expressed in terms of the weights of a finite class of circuits in a finite set, in connection with kinetic properties along the circuits. Conversely, it is proved that if we join any finite doubly infinite strictly stationary Markov chain of order r for which transitions hold cyclically with a second chain with the same transitions for the inverse time-sense, then they may be represented as circuit chains of order r.

19.
莫晓云  杨向群 《数学学报》2018,61(1):143-154
This paper studies the batch Markov arrival process (BMAP) by a sample-path (trajectory) analysis, in contrast to the matrix-analytic methods commonly used to study BMAPs. From the representation (D_k, k=0,1,2,…) of the BMAP, the jump probabilities of the BMAP are obtained, the phase process of the BMAP is proved to be a time-homogeneous Markov chain, and the transition probabilities and the density matrix of the phase process are derived. In addition, given a Q-process J with a finite state space, and letting N denote the counting process of its jump points, it is proved that the accompanying process X*=(N,J) of the Q-process J is a MAP, and the transition probabilities and the representation (D_0,D_1) of this MAP are obtained; they are expressed in terms of the density matrix Q.
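A minimal numerical sketch of the representation mentioned above (matrices invented for illustration): for a MAP with representation (D_0, D_1), the generator of the phase process is D = D_0 + D_1, whose rows sum to zero:

# Phase-process generator of a MAP from its representation (D_0, D_1).
import numpy as np

D0 = np.array([[-3.0,  1.0],
               [ 0.5, -2.0]])     # transitions without an arrival
D1 = np.array([[ 1.5,  0.5],
               [ 0.5,  1.0]])     # transitions accompanied by an arrival
D = D0 + D1
print(D, D.sum(axis=1))           # phase-process generator; row sums are zero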

20.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are “entangled” in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain. Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10.
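A sketch of the lifting described above on an invented two-state chain: the amplitudes of the vector state are square roots of the classical joint path probabilities, so projecting back onto the computational basis recovers the classical chain, while the Schur (entrywise) product replaces matrix multiplication:

# Entangled-lifting idea: amplitudes = sqrt of classical joint path probabilities.
import numpy as np
from itertools import product

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
p0 = np.array([0.5, 0.5])

paths = list(product([0, 1], repeat=3))
probs = np.array([p0[a] * P[a, b] * P[b, c] for a, b, c in paths])  # classical joint probabilities
amps = np.sqrt(probs)                                               # amplitudes of the lifted vector state
print(np.allclose(amps**2, probs), probs.sum())                     # True, 1.0: the classical chain is recovered
print(P * P)                                                        # Schur (entrywise) product, used in place of matrix multiplication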
