Similar Articles (20 results)
1.
2.
Motivated by applications in Markov chain Monte Carlo, we discuss what it means for one Markov chain to be an approximation to another. Specifically included in that discussion are situations in which a Markov chain with continuous state space is approximated by one with finite state space. A simple sufficient condition for close approximation is derived, which indicates the existence of three distinct approximation regimes. Counterexamples are presented to show that these regimes are real and not artifacts of the proof technique. An application to the "ball walk" of Lovász and Simonovits is provided as an illustrative example. Partially supported by EPSRC grant "Sharper Analysis of Randomized Algorithms: a Computational Approach" and the IST Programme of the EU under contract IST-1999-14036 (RAND-APX). The work described here was partially carried out while the author was visiting the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK. Postal address: School of Informatics, University of Edinburgh, The King's Buildings, Edinburgh EH9 3JZ, United Kingdom.
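For intuition, here is a minimal numerical sketch (my own illustration, not the paper's construction): a one-dimensional "ball walk" on [0, 1] approximated by an n-bin finite-state chain, with the quality of the approximation read off from the total variation distance of the stationary distribution to the uniform one. The step size h and bin count n are arbitrary choices.

```python
import numpy as np

def ball_walk_matrix(n, h):
    """n-bin finite-state approximation of a ball walk on [0, 1].

    From a bin midpoint x the continuous chain proposes a uniform point in
    [x - h, x + h] and stays put if the proposal leaves [0, 1]; each bin
    receives the overlapping proposal mass, and rejected mass sits on the
    diagonal.
    """
    mids = (np.arange(n) + 0.5) / n
    P = np.zeros((n, n))
    for i, x in enumerate(mids):
        lo, hi = x - h, x + h
        for j in range(n):
            a, b = j / n, (j + 1) / n
            P[i, j] = max(0.0, min(hi, b) - max(lo, a)) / (2 * h)
        P[i, i] += 1.0 - P[i].sum()   # rejected proposals stay put
    return P

n = 50
P = ball_walk_matrix(n, h=0.1)
pi = np.ones(n) / n
for _ in range(5000):                 # crude power iteration
    pi = pi @ P
# The ball walk on a convex body is uniform in equilibrium, so the
# discretized chain should be nearly uniform too.
print("TV distance to uniform:", 0.5 * np.abs(pi - 1.0 / n).sum())
```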

3.
The Efficient Determination Criterion (EDC) generalizes the AIC and BIC criteria and provides a class of consistent estimators for the order of a Markov chain with finite state space. In this note, we derive rates of convergence for the EDC estimates. *Partially supported by CNPq, CAPES/PROCAD, FAPDF/PRONEX, FINATEC and FUNPE/UnB. **Partially supported by CAPES.
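As a concrete, entirely illustrative rendering of the criterion family: with penalty weight c_n, the estimate minimises -2·logL_k + c_n·|S|^k(|S| - 1) over candidate orders k, where c_n = 2 recovers AIC and c_n = log n recovers BIC. The data-generating rule below is a hypothetical second-order chain, not from the note.

```python
import math
import random
from collections import Counter

def log_likelihood(seq, k):
    """Maximised log-likelihood of a k-th order Markov model for seq."""
    if k == 0:
        n = len(seq)
        return sum(c * math.log(c / n) for c in Counter(seq).values())
    pair, ctx = Counter(), Counter()
    for i in range(k, len(seq)):
        c = tuple(seq[i - k:i])
        pair[(c, seq[i])] += 1
        ctx[c] += 1
    return sum(v * math.log(v / ctx[c]) for (c, _), v in pair.items())

def edc_order(seq, alphabet_size, max_order, c_n):
    """Order estimate minimising -2*logL + c_n * (free parameter count)."""
    scores = [-2 * log_likelihood(seq, k)
              + c_n * alphabet_size**k * (alphabet_size - 1)
              for k in range(max_order + 1)]
    return min(range(max_order + 1), key=scores.__getitem__)

# Hypothetical 2nd-order binary chain; BIC (c_n = log n) should find k = 2.
random.seed(0)
seq = [0, 1]
for _ in range(5000):
    p = 0.9 if seq[-1] == seq[-2] else 0.2   # depends on the last two symbols
    seq.append(1 if random.random() < p else 0)
print(edc_order(seq, alphabet_size=2, max_order=4, c_n=math.log(len(seq))))
```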

4.
The topological Markov chain, or subshift of finite type, is the restriction of the shift map to an invariant subset determined by a 0,1-matrix; it has important applications in the theory of dynamical systems. In this paper, the topological Markov chain is discussed. First, we introduce a directed-graph structure on a 0,1-matrix, and then, using it as a tool, we give equivalent conditions relating topological entropy, chaos, the nonwandering set, the set of periodic points, and the underlying 0,1-matrix. This work is supported in part by the Foundation of Advanced Research Centre, Zhongshan University.
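One classical fact in this circle of ideas is that the topological entropy of the subshift of finite type determined by a 0,1-matrix A equals the logarithm of the spectral radius of A. A two-line check on the golden-mean shift (my example, not the paper's):

```python
import numpy as np

# Golden-mean shift: sequences over {0, 1} with no two consecutive 1s.
A = np.array([[1, 1],
              [1, 0]])

entropy = np.log(max(abs(np.linalg.eigvals(A))))
print(entropy, np.log((1 + np.sqrt(5)) / 2))   # both ≈ 0.4812
```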

5.
6.
7.
An equivalent representation of the Spearman footrule is considered, and a characterization in terms of a Markov chain is established. A martingale approach is thereby incorporated into the study of the asymptotic normality of the statistic.
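For reference, the Spearman footrule itself is just the L¹ distance between the two rank vectors. A minimal sketch (the data are illustrative; scipy.stats.rankdata is a standard way to obtain ranks):

```python
import numpy as np
from scipy.stats import rankdata

def footrule(x, y):
    """Spearman footrule D = sum_i |R_i - S_i| of the paired ranks."""
    r, s = rankdata(x), rankdata(y)
    return np.abs(r - s).sum()

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = x + rng.normal(size=100)               # positively associated pair
# Association drives the footrule down relative to an independent pair.
print(footrule(x, y), footrule(x, rng.normal(size=100)))
```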

8.
To each function ϕ̃(ω) mapping the upper complex half-plane ℂ⁺ into itself such that the coefficient of ω in the Nevanlinna integral representation is one, we associate the kernel p(y, dx) of a Markov chain on ℝ. The aim of this paper is to study this chain in terms of the measure μ appearing in the Nevanlinna representation of ϕ̃(ω). We prove in particular three results. If x² is integrable with respect to μ, a law of large numbers is available. If μ is singular, i.e. if ϕ̃ is an inner function, then the operator P on L∞(ℝ) for the Lebesgue measure is the adjoint of the operator T defined on L¹(ℝ) by T(f)(ω) = f(ϕ(ω)), where ϕ is the restriction of ϕ̃ to ℝ. Finally, if μ is both singular and compactly supported, we give a necessary and sufficient condition for recurrence of the chain. Received: 24 April 1998 / Revised version: 13 March 2000 / Published online: 20 October 2000

9.
A batch Markov arrival process (BMAP) X* = (N, J) is a two-dimensional Markov process with two components: a counting process N and a phase process J. It is proved that the phase process is a time-homogeneous Markov chain with a finite state space (a Markov chain, for short). In this paper, a new, inverse problem is posed for the first time: given a Markov chain J, can we construct a counting process N such that the two-dimensional process X* = (N, J) is a BMAP? Such a process X* = (N, J) is said to be an adjoining BMAP for the Markov chain J. For a given Markov chain, adjoining processes exist and are not unique. Two kinds of adjoining BMAPs are constructed: BMAPs with fixed constant batches and BMAPs with independent and identically distributed (i.i.d.) random batches. The method used here is not the usual matrix-analytic method for studying BMAPs but a path-analytic one: we directly construct sample paths of the adjoining BMAPs. Expressions for the characteristic matrices (D_k, k = 0, 1, 2, …) and the transition probabilities of the adjoining BMAP are obtained from the density matrix Q of the given Markov chain J, and two main theorems are established.
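A hedged sketch of the first construction (fixed constant batches), written from the abstract alone rather than from the paper: given a generator matrix Q for J, let N jump by a fixed batch size b at every transition of J. All numerical values are illustrative.

```python
import numpy as np

def simulate_adjoining(Q, b, t_end, rng):
    """Sample path of (N, J): N increases by the fixed batch b at each jump of J."""
    n_states = Q.shape[0]
    j, n, t = 0, 0, 0.0
    path = [(t, n, j)]
    while t < t_end:
        rate = -Q[j, j]                       # total jump rate out of state j
        t += rng.exponential(1.0 / rate)      # exponential holding time
        probs = Q[j].copy()
        probs[j] = 0.0
        j = rng.choice(n_states, p=probs / rate)   # embedded jump chain
        n += b                                # fixed constant batch per jump
        path.append((t, n, j))
    return path

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
rng = np.random.default_rng(1)
print(simulate_adjoining(Q, b=3, t_end=5.0, rng=rng)[:5])
```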

10.
11.
In this paper we study the flux through a finite Markov chain of a quantity, which we call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for modelling open systems whose dynamics have the Markov property.
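A minimal sketch of this picture (the chain is illustrative, not from the paper): a unit of mass is injected at a source state each step, transported by the transition matrix, and accumulates in the absorbing states.

```python
import numpy as np

P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # state 0: transient source
    [0.0, 0.0, 0.5, 0.5],   # state 1: transient
    [0.0, 0.0, 1.0, 0.0],   # state 2: absorbing
    [0.0, 0.0, 0.0, 1.0],   # state 3: absorbing
])
source = np.array([1.0, 0.0, 0.0, 0.0])

mass = np.zeros(4)
for step in range(50):
    mass = mass @ P + source   # transport, then inject fresh mass
print(mass)  # mass piles up in the absorbing states 2 and 3
```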

12.
In this note we study the connection between the Martin boundaries of a non-recurrent Markov chain and its parts. Translated from Matematicheskie Zametki, Vol. 3, No. 1, pp. 93–102, January, 1968. In conclusion, the author would like to thank M. G. Shur for his interest in this work.

13.
14.
1. Introduction. In reliability theory, in order to calculate the failure frequency of a repairable system, Shi first introduced and studied the transition frequency between two disjoint state sets for a finite Markov chain and for a vector Markov process with finite discrete state space, and obtained a general formula for the transition frequency. Then, under the condition that the generator matrix of the Markov chain is uniformly bounded, Shi [8, 9] again proved the transition frequency formula and obtained three other useful formulas. Obviously, the point (or counting) process of sta…
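In stationarity, the transition frequency between two disjoint state sets A and B for a finite chain with generator Q takes the form f(A, B) = Σ_{i∈A} Σ_{j∈B} π_i q_ij. A small numerical sketch of that formula (the generator below is illustrative, not from the paper):

```python
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.5,  0.5, -1.0]])

# stationary distribution: pi Q = 0 with sum(pi) = 1
A_sys = np.vstack([Q.T, np.ones(3)])
b_sys = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A_sys, b_sys, rcond=None)

A, B = [0], [1, 2]
freq = sum(pi[i] * Q[i, j] for i in A for j in B)
print(freq)   # expected number of A -> B transitions per unit time
```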

15.
The optimal-stopping problem in a partially observable Markov chain is considered and formulated as a Markov decision process. We treat a multiple stopping problem in this paper. Unlike the classical stopping problem, the current state of the chain is not known directly; information about the current state is always available from an information process. Several properties of the value and the optimal policy are given. For example, if we add another stop action to the k-stop problem, the increment of the value is decreasing in k. The author wishes to thank Professor M. Sakaguchi of Osaka University for his encouragement and guidance. He also thanks the referees for their careful readings and helpful comments.
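A heavily simplified sketch of the k-stop recursion on a *fully observed* chain (the paper's chain is only partially observed, so this is just a stand-in under assumptions of my own, with illustrative data). The printout illustrates the quoted property: the increment of the value from allowing one more stop decreases in k.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
r = np.array([1.0, 2.0, 5.0])    # reward collected at each stop
T, K = 20, 4                     # horizon and maximal number of stops

# V[j, s]: value at the current time slice with j stops remaining
V = np.zeros((K + 1, len(r)))    # terminal values: nothing to gain after T
for t in range(T):               # backward induction over T time slices
    V_new = V.copy()
    for j in range(1, K + 1):
        cont = P @ V[j]                  # keep all j stops for later
        stop = r + P @ V[j - 1]          # spend one stop now
        V_new[j] = np.maximum(stop, cont)
    V = V_new

print(np.round(V[:, 0], 3))
print("increments:", np.round(np.diff(V[:, 0]), 3))   # decreasing in k
```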

16.
This paper develops an efficient LP algorithm for solving single-chain undiscounted Markov decision problems. The algorithm imposes, within the framework of the simplex method, the multiple-choice constraints that exactly one basic variable be chosen from each Markov state. It is proved that the algorithm converges to an optimal solution in a finite number of steps.
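For contrast with the paper's specialised simplex variant, here is the textbook LP for a unichain average-reward MDP, solved with scipy.optimize.linprog: minimise the gain g subject to g + h[s] ≥ r[s, a] + Σ_{s'} P[s, a, s']·h[s'] for all (s, a), with h[0] pinned to 0. The data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

n_s, n_a = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a, s']
r = np.array([[1.0, 0.0],
              [2.0, 0.5]])                 # r[s, a]

# variables x = (g, h_0, h_1); each constraint rewritten as A_ub @ x <= b_ub
A_ub, b_ub = [], []
for s in range(n_s):
    for a in range(n_a):
        row = np.zeros(1 + n_s)
        row[0] = -1.0                       # -g
        row[1 + s] -= 1.0                   # -h[s]
        row[1:] += P[s, a]                  # + sum_{s'} P h[s']
        A_ub.append(row)
        b_ub.append(-r[s, a])

bounds = [(None, None), (0.0, 0.0), (None, None)]   # g free, h_0 = 0
res = linprog(c=[1.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal average reward:", res.x[0])
```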

17.
Let N_n denote the quotient poset of the Boolean lattice B_n under equivalence by rotation. Griggs, Killian, and Savage proved that N_p is a symmetric chain order for prime p. In this paper, we settle the question posed in that paper, namely whether N_n is a symmetric chain order for all n. This paper provides an algorithm that produces a symmetric chain decomposition (SCD). We accomplish this by modifying the bracketing from Greene and Kleitman, which allows us to take appropriate "middles" of certain chains from the Greene-Kleitman SCD for B_n. We also prove additional properties of the resulting SCD and show that this settles a related conjecture.
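A sketch of the unmodified Greene-Kleitman bracketing that the paper builds on (my own rendering of the classical construction): reading each 1 as "(" and each 0 as ")", the unmatched positions of a binary string always read 0…01…1, and sweeping that block from all-0 to all-1 yields the symmetric chain through the string. Collecting the chains over all strings decomposes B_n.

```python
from itertools import product

def unmatched_positions(x):
    """Indices left unmatched by bracket-matching (1 = '(' , 0 = ')')."""
    stack, unmatched = [], []
    for i, bit in enumerate(x):
        if bit == 1:
            stack.append(i)
        elif stack:
            stack.pop()          # match this 0 with the nearest open 1
        else:
            unmatched.append(i)  # unmatched 0
    return unmatched + stack     # unmatched 0s, then unmatched 1s

def chain_of(x):
    """The symmetric chain containing x, listed from bottom to top."""
    free = unmatched_positions(x)
    base = list(x)
    for i in free:
        base[i] = 0
    chain = []
    for k in range(len(free) + 1):
        y = base[:]
        for i in free[len(free) - k:]:
            y[i] = 1             # fill the unmatched block from the right
        chain.append(tuple(y))
    return chain

n = 4
chains = {tuple(chain_of(x)) for x in product((0, 1), repeat=n)}
for c in sorted(chains):
    print(c)   # every string of B_4 appears in exactly one symmetric chain
```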

18.
In this paper, we first introduce a new tensor product for the transition probability tensor arising from a higher-order Markov chain. We then establish some properties of the new tensor product and study its relationship with the stationary probability vector, showing that the results obtained with this product parallel the first-order case. Furthermore, we prove convergence of the iterates of a transition probability tensor to the stationary probability vector. Finally, we show how to compute a stationary probability vector in several numerical examples and compare the proposed method with an existing method for obtaining stationary probability vectors.
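The paper's new tensor product is not reproduced here; as a baseline, the standard higher-order power method for a third-order transition tensor looks like this (random, illustrative data): iterate x ← Pxx, i.e. x_i = Σ_{j,k} P[i,j,k]·x_j·x_k, which preserves the probability simplex.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
P = rng.random((n, n, n))
P /= P.sum(axis=0, keepdims=True)   # each column pair (j, k) sums to 1 over i
# P[i, j, k] plays the role of Prob(X_t = i | X_{t-1} = j, X_{t-2} = k)

x = np.ones(n) / n
for _ in range(200):
    x_new = np.einsum('ijk,j,k->i', P, x, x)   # x <- P x x
    if np.linalg.norm(x_new - x, 1) < 1e-12:
        break
    x = x_new
print(x, x.sum())   # a fixed point whose entries sum to 1
```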

19.
This paper develops a rare event simulation algorithm for a discrete-time Markov chain in the first orthant. The algorithm gives a very good estimate of the stationary distribution along one of the axes and it is shown to be efficient. A key idea is to study an associated time reversed Markov chain that starts at the rare event. We will apply the algorithm to a Markov chain related to a Jackson network with two stations.
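The key object, the time-reversed chain, is easy to exhibit (a standard construction, shown on an illustrative two-state matrix rather than the paper's network): for an ergodic P with stationary distribution π, the reversal P̃(y, x) = π(x)P(x, y)/π(y) is again a stochastic matrix, and simulating it from the rare state runs the original dynamics backwards.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                        # stationary distribution of P

# time reversal: P_rev[y, x] = pi[x] * P[x, y] / pi[y]
P_rev = np.diag(1.0 / pi) @ P.T @ np.diag(pi)
print(P_rev, P_rev.sum(axis=1))       # rows sum to 1: again stochastic
```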

20.
The derivation of the expected time to coupling in a Markov chain and its relation to the expected time to mixing (as introduced by the author [J.J. Hunter, Mixing times with applications to perturbed Markov chains, Linear Algebra Appl. 417 (2006) 108-123]) are explored. The two-state and three-state cases are examined in detail.
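A hedged sketch of the quantities involved (conventions mine, matrix illustrative): mean first passage times from the fundamental matrix Z = (I − P + 1πᵀ)⁻¹, and the expected time to mixing Σ_j π_j·m_ij, which under the m_ii = 0 convention is Kemeny's constant and does not depend on the starting state i.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = P.shape[0]
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# fundamental matrix and mean first passage times (m[i, i] = 0 convention)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

print(M)        # here m01 = 1/0.1 = 10 and m10 = 1/0.2 = 5
print(M @ pi)   # expected time to mixing: the same value from every state
```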
