Similar Articles
20 similar articles found.
1.
This paper presents a state-reduction-based algorithm for computing the steady-state probability vectors of embedded Markov chains of M/G/1 type. The algorithm uses the notion of a state reduction box to streamline computation. The computational details are linked directly to theoretical results developed recently by Grassmann and Heyman [6]. Exploiting these connections, together with a method given in Neuts [18] for finding the G matrix of PH/PH/1 queues, we also propose a hybrid approach for solving PH/PH/1 queues. Using several numerical examples, we report our computational experience and offer some observations about the relative merits of these approaches.
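As a rough illustration of the state-reduction idea (a scalar sketch in the Grassmann–Heyman style, not the paper's M/G/1-type block algorithm), the following code eliminates one state at a time without subtractions and then back-substitutes for the stationary vector; the example matrix is made up.

```python
import numpy as np

def gth_steady_state(P):
    """State-reduction (GTH-style) stationary vector of a finite
    irreducible stochastic matrix P, avoiding subtractions."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    # Reduction sweep: eliminate states n-1, ..., 1.
    for k in range(n - 1, 0, -1):
        s = P[k, :k].sum()                          # mass from k to surviving states
        P[:k, k] /= s                               # rescaled entry probabilities into k
        P[:k, :k] += np.outer(P[:k, k], P[k, :k])   # fold paths through k back in
    # Back-substitution: unnormalized stationary weights.
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[:k] @ P[:k, k]
    return pi / pi.sum()

P = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.1, 0.4, 0.5]]
print(gth_steady_state(P))
```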

2.
Efficient algorithms for finding steady-state probabilities are presented and compared with the Gaussian elimination method for two special classes of finite-state Markov chains. One class has block matrix steps and allows jumps of up to k block steps; the other is a generalization of the class considered by Shanthikumar and Sargent in which each element is a matrix.
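For reference, a minimal sketch of the Gaussian-elimination baseline the abstract compares against: solve the stationary equations πP = π with one balance equation replaced by the normalization Σπ_i = 1. The 3-state matrix is a made-up example, not one of the paper's block-structured classes.

```python
import numpy as np

def steady_state_gauss(P):
    """Solve pi P = pi with sum(pi) = 1 by plain Gaussian elimination
    (numpy's dense solver), replacing one balance equation by normalization."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.0, 0.5, 0.5]])
print(steady_state_gauss(P))  # [0.25, 0.5, 0.25] for this example
```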

3.
4.
We derive an expression for the expected time until a pattern appears in a higher-order Markov chain, with and without a starting sequence. This yields a method for directly calculating the expected time until the first of a collection of patterns appears, together with the probability, for each pattern, that it is the first to appear.
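A small sketch of this kind of computation, restricted to the i.i.d. (order-0) case rather than the higher-order chains treated in the paper: the expected waiting time for a pattern follows from first-step equations on the prefix-automaton states. The helper `expected_hitting_time` and the coin example are illustrative, not from the paper.

```python
import numpy as np

def expected_hitting_time(pattern, probs):
    """Expected number of draws until `pattern` first appears in an i.i.d.
    symbol stream; probs maps each symbol to its probability."""
    m = len(pattern)

    def step(state, c):
        # Longest suffix of pattern[:state] + c that is a prefix of pattern.
        s = pattern[:state] + c
        for k in range(min(len(s), m), -1, -1):
            if s.endswith(pattern[:k]):
                return k

    # First-step equations: E[s] = 1 + sum_c p_c * E[step(s, c)], with E[m] = 0.
    A = np.eye(m)
    b = np.ones(m)
    for s in range(m):
        for c, p in probs.items():
            t = step(s, c)
            if t < m:
                A[s, t] -= p
    return np.linalg.solve(A, b)[0]

# Fair coin: 'HH' takes 6 flips on average, 'HT' takes 4.
print(expected_hitting_time("HH", {"H": 0.5, "T": 0.5}))
print(expected_hitting_time("HT", {"H": 0.5, "T": 0.5}))
```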

5.
Motivated by the problem of finding a satisfactory quantum generalization of classical random walks, we construct a new class of quantum Markov chains that are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are "entangled" in a sense specified in the paper.

The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication.

The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g., the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors onto this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also solves the problem that motivated our study.

In view of possible applications to quantum statistical mechanics as well, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the ergodic type of the underlying classical chain.

Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10.
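A numerical sketch of the lifting described above, under the simplifying assumption of positive real square roots (the construction allows arbitrary complex phases): path amplitudes are square roots of the classical joint probabilities, and the entrywise (Schur) square recovers the classical chain. All numbers are made up.

```python
import numpy as np
from itertools import product

# Classical chain: initial law p0 and transition matrix P.
p0 = np.array([0.6, 0.4])
P = np.array([[0.7, 0.3], [0.2, 0.8]])
n = 3  # path length

def prob(path):
    """Classical joint probability of a path."""
    return p0[path[0]] * np.prod([P[i, j] for i, j in zip(path, path[1:])])

# Amplitude of a path = square root of its classical probability
# (positive root here; the construction permits complex phases).
amp = {path: np.sqrt(prob(path)) for path in product(range(2), repeat=n)}

# Restricted to the computational basis, the lifted state reproduces the
# classical chain: squared amplitudes are the classical joint probabilities.
assert all(np.isclose(a ** 2, prob(path)) for path, a in amp.items())

# Schur (entrywise) multiplication replaces matrix multiplication in the
# quantum chain's correlation formula; the Schur square of sqrt(P) is P.
sqrtP = np.sqrt(P)
print(sqrtP * sqrtP)
```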

6.
7.
Let C be a collection of particles, each of which independently undergoes the same Markov chain, and let d be a metric on the state space. Then, using transition probabilities, for distinct p, q in C, any time t, and real x, we can calculate F_pq^(t)(x) = Pr[d(p_t, q_t) < x]. For each time t ≥ 0, the collection C is shown to be a probabilistic metric space (PM space) under a suitable triangle function. In this paper we study the structure and limiting behavior of PM spaces so constructed. We show that whenever the transition probabilities have non-degenerate limits, the limit of the family of PM spaces exists and is a PM space under the same triangle function. For an irreducible, aperiodic, positive recurrent Markov chain, the limiting PM space is equilateral. For an irreducible, positive recurrent Markov chain with period p, the limiting PM space has at most [p/2] + 2 distinct distance distribution functions. Finally, we exhibit a class of Markov chains in which all states are transient, so that P_ij(t) → 0 for all states i, j, but for which the F_pq^(t) all have non-trivial limits and hence a non-trivial limiting PM space does exist.
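A minimal sketch of the quantity F_pq^(t) for a finite chain, assuming two independent copies started at p and q and the metric d(i, j) = |i − j|; the transition matrix is a made-up example.

```python
import numpy as np

def distance_df(P, p0, q0, t, d):
    """F_pq^(t)(x) = Pr[d(p_t, q_t) < x] for two independent copies of the
    chain started at p0 and q0, from the t-step transition probabilities."""
    Pt = np.linalg.matrix_power(P, t)
    n = P.shape[0]
    return lambda x: sum(Pt[p0, i] * Pt[q0, j]
                         for i in range(n) for j in range(n) if d(i, j) < x)

P = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]])
F = distance_df(P, p0=0, q0=2, t=5, d=lambda i, j: abs(i - j))
print([round(F(x), 4) for x in (0.5, 1.5, 2.5)])  # values of the distance d.f.
```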

8.
Let {X_n}_{n≥0} be a Markov chain with stationary distribution f(x)ν(dx), ν being a σ-finite measure on E ⊂ R^d. Under strict stationarity and mixing conditions we obtain consistency and asymptotic normality for a general class of kernel estimates of f(·). When the assumption of stationarity is dropped, these results are extended to geometrically ergodic chains. Partially supported by CAPES. Partially supported by CNPq, PROCAD/CAPES, PRONEX/FAPDF and FINATEC/UnB.
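A sketch of the setting, assuming a geometrically ergodic AR(1) chain (an illustration, not an example from the paper): a Gaussian kernel estimate of the stationary density, compared against the known stationary law.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) chain X_{n+1} = a X_n + eps_n (geometrically ergodic).
a, n = 0.5, 20000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.standard_normal()

# Gaussian-kernel estimate of the stationary density f.
h = 1.06 * x.std() * n ** (-0.2)  # rule-of-thumb bandwidth
def f_hat(t):
    return np.mean(np.exp(-0.5 * ((t - x) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# Here the stationary law is N(0, 1/(1 - a^2)); compare the densities at 0.
sigma2 = 1.0 / (1.0 - a ** 2)
print(f_hat(0.0), 1.0 / np.sqrt(2 * np.pi * sigma2))
```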

9.
The aim of this paper is to solve the basic stochastic shortest-path problem (SSPP) for Markov chains (MCs) with countable state space and then apply the results to a class of nearest-neighbor MCs on the lattice state space $\mathbb Z \times \mathbb Z $ whose only moves are one step up, down, to the right, or to the left. The objective is to control the MC, by suppressing certain moves, so as to minimize the expected time to reach a given target state. We characterize the optimal policies for SSPPs for general MCs with countably infinite state space, the main tool being a verification theorem for the value function, and give an algorithmic construction. We then apply the results to a large class of examples: nearest-neighbor MCs for which the state space $\mathbb Z \times \mathbb Z $ is split by a vertical line into two regions, inside each of which the transition probabilities are the same for every state. We give a necessary and sufficient condition for the so-called distance-diminishing policy to be optimal. For the general case in which this condition does not hold, we develop an explicit finite construction of an optimal policy.
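A finite-state sketch of the SSPP fixed point (the paper treats countable chains and the lattice $\mathbb Z \times \mathbb Z$): value iteration for V(s) = min over actions of [1 + Σ_{s'} P_a(s, s') V(s')] with V(target) = 0. The birth-death example and the `suppress_up` action are hypothetical, standing in for the paper's move-suppression control.

```python
import numpy as np

def sspp_value_iteration(actions, target, n, iters=500):
    """Minimal expected hitting time of `target`:
    V(s) = min_a [1 + sum_s' P_a[s, s'] V(s')], with V(target) = 0."""
    V = np.zeros(n)
    for _ in range(iters):
        V = np.min([1.0 + P @ V for P in actions.values()], axis=0)
        V[target] = 0.0
    return V

# Toy birth-death chain on {0,...,4} with target 0; the controller may
# suppress the "up" move.
def bd(p_up, n=5):
    P = np.zeros((n, n))
    for s in range(n):
        P[s, min(s + 1, n - 1)] += p_up
        P[s, max(s - 1, 0)] += 1.0 - p_up
    return P

actions = {"free": bd(0.4), "suppress_up": bd(0.0)}
print(sspp_value_iteration(actions, target=0, n=5))  # expect [0, 1, 2, 3, 4]
```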

10.
Conditions, as well as quantitative estimates, are obtained for Markov chains with a common set of states that are uniformly continuous in time. Within the framework of the method under consideration, it can be shown that such chains admit finite approximation. Translated from Problemy Ustoichivosti Stokhasticheskikh Modelei — Trudy Seminara, pp. 4–12, 1980.

11.
The following modification of a general state-space discrete-time Markov chain is considered: certain transitions are deemed "forbidden," and the chain evolves until such a transition occurs. At that instant the value of the chain is "replaced" according to a given rule, and, starting from the new value, the chain evolves normally until a forbidden transition occurs again; the cycle is then repeated. The relationship of this modified process to the original one is studied in general terms, with particular emphasis on invariant measures. Examples illustrate the results obtained.
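A simulation sketch of the modified process, with a made-up forbidden transition and replacement rule: whenever the sampled transition is forbidden, the new value is replaced according to the rule, and the empirical occupation frequencies estimate an invariant measure of the modified chain.

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.5, 0.5, 0.0], [0.3, 0.4, 0.3], [0.0, 0.5, 0.5]])
forbidden = {(2, 1)}          # transitions that trigger a replacement
replace = lambda i, j: 0      # replacement rule: restart the chain at state 0

n, x = 200_000, 0
counts = np.zeros(3)
for _ in range(n):
    y = rng.choice(3, p=P[x])
    x = replace(x, y) if (x, y) in forbidden else y
    counts[x] += 1

print(counts / n)  # empirical invariant measure of the modified chain
```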

12.
13.
The aim of this paper is to examine multiple Markov dependence in both the discrete- and the continuous-parameter case. In both cases the Markov property with arbitrary parameter values is investigated, and it is shown to lead to the degeneration of multiple Markov dependence into simple Markov dependence.
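For intuition, the standard device behind such degeneration arguments: a multiple (here second-order) Markov chain becomes a simple Markov chain on an enlarged state space of tuples. The transition probabilities below are made up.

```python
import numpy as np
from itertools import product

# A second-order chain on {0, 1}: q[(x_{n-1}, x_n)][x_{n+1}].
q = {(0, 0): [0.9, 0.1], (0, 1): [0.5, 0.5],
     (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9]}

# Lift to a simple (first-order) chain on the pairs (x_{n-1}, x_n).
pairs = list(product([0, 1], repeat=2))
P = np.zeros((4, 4))
for a, (i, j) in enumerate(pairs):
    for b, (k, l) in enumerate(pairs):
        if k == j:  # consecutive pairs must overlap in the shared coordinate
            P[a, b] = q[(i, j)][l]

print(P.sum(axis=1))  # each row sums to 1: a bona fide simple Markov chain
```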

14.
15.
16.
It is known that each Markov chain has associated with it a polytope and a family of Markov measures indexed by the interior points of the polytope. Measure-preserving factor maps between Markov chains must preserve the associated families. In the present paper, we augment this structure by identifying measures corresponding to points on the boundary of the polytope. These measures are also preserved by factor maps. We examine the data they provide and give examples to illustrate the use of this data in ruling out the existence of factor maps between Markov chains. E. Cawley was partially supported by the Modern Analysis joint NSF grant in Berkeley. S. Tuncel was partially supported by NSF Grant DMS-9303240.

17.
It is shown that the combinatorics of commutation relations is well suited for analyzing the convergence rate of certain Markov chains. Examples studied include random walk on irreducible representations, a local random walk on partitions whose stationary distribution is the Ewens distribution, and some birth–death chains.
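The paper's commutation-relation machinery is not reproducible in a few lines, but for the birth-death examples it mentions, the convergence rate can be illustrated numerically through the spectral gap of the transition matrix; the chain below is a made-up lazy walk, not one of the paper's examples.

```python
import numpy as np

# Lazy birth-death walk on {0,...,10}. Birth-death chains are reversible,
# so the eigenvalues are real, and 1 minus the second-largest eigenvalue
# modulus controls the exponential rate of convergence to stationarity.
m = 10
P = np.zeros((m + 1, m + 1))
for s in range(m + 1):
    if s < m:
        P[s, s + 1] = 0.3
    if s > 0:
        P[s, s - 1] = 0.3
    P[s, s] = 1.0 - P[s].sum()  # laziness keeps the chain aperiodic

eig = np.sort(np.abs(np.linalg.eigvals(P)))
print("spectral gap:", 1.0 - eig[-2])
```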

18.
In principle it is possible to characterize the long run behavior of any evolutionary game by finding an analytical expression for its limit probability distribution. However, it is cumbersome to do so when the state space is large and the rate of mutation is significant. This paper gives upper and lower bounds for the limit distribution, which are easy to compute. The bounds are expressed in terms of the maximal and minimal row sums of parts of the transition matrix.
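A sketch of why row sums bound the limit distribution: if π is stationary, then π(A) = Σ_i π_i P(i, A) is a convex combination of the row sums P(i, A), hence lies between their minimum and maximum. (The paper's bounds are finer; this shows the basic mechanism on a made-up matrix.)

```python
import numpy as np

def limit_mass_bounds(P, A):
    """Bounds on the stationary mass of a set A from the minimal and
    maximal row sums over the columns in A."""
    rows = P[:, list(A)].sum(axis=1)
    return rows.min(), rows.max()

P = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.3, 0.3, 0.4]])
lo, hi = limit_mass_bounds(P, A={0})
pi = np.linalg.matrix_power(P, 200)[0]  # limit distribution by iteration
print(lo, "<=", pi[0], "<=", hi)
```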

19.
20.
This paper discusses finite-dimensional optimal filters for partially observed Markov chains. We consider a model of a system containing a finite number of components, each of which behaves as an independent finite-state continuous-time Markov chain. Using measure-change techniques, various estimators are derived.
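A discrete-time sketch of filtering a partially observed chain (the paper works in continuous time via measure-change techniques): the posterior over the hidden state is updated by a predict step followed by a likelihood correction. The matrices are hypothetical.

```python
import numpy as np

def hmm_filter(P, emission, y_seq, p0):
    """Posterior pi_n(i) = Pr[X_n = i | Y_1..Y_n] for a hidden chain with
    transition matrix P and observation likelihoods emission[i, y]."""
    pi = np.array(p0, dtype=float)
    for y in y_seq:
        pi = emission[:, y] * (P.T @ pi)  # predict, then correct by likelihood
        pi /= pi.sum()                    # normalize to a probability vector
    return pi

P = np.array([[0.9, 0.1], [0.2, 0.8]])        # hidden chain
emission = np.array([[0.8, 0.2], [0.3, 0.7]])  # Pr[Y = y | X = i]
print(hmm_filter(P, emission, y_seq=[0, 0, 1], p0=[0.5, 0.5]))
```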
