Found 20 similar documents; search time: 0 ms.
1.
M. G. Shur 《Mathematical Notes》2010,87(1-2):271-280
The proposed definition of the convergence parameter R(W), corresponding to a Markov chain X with a measurable state space (E, ℰ) and any nonempty set W of bounded-below measurable functions f: E → ℝ, is wider than the well-known definition of the convergence parameter R in the sense of Tweedie or Nummelin. Very often R(W) < ∞, and there exists a set playing the role of the absorbing set in Nummelin's definition of R. Special attention is paid to the case in which E is locally compact, X is a Feller chain on E, and W coincides with the family C₀⁺ of all compactly supported continuous functions f ≥ 0 (f ≢ 0). In particular, conditions are found under which R(C₀⁺)⁻¹ coincides with the norm of an appropriate modification of the chain's transition operator.
2.
Zhou Zuoling 《Acta Mathematica Sinica (English Series)》1988,4(4):330-337
The topological Markov chain, or subshift of finite type, is the restriction of the shift to an invariant subset determined by a 0,1-matrix; it has important applications in the theory of dynamical systems. In this paper, the topological Markov chain is discussed. First, we introduce a directed-graph structure on a 0,1-matrix; then, using it as a tool, we give some equivalent conditions relating the topological entropy, chaos, the nonwandering set, the set of periodic points, and the 0,1-matrix involved.
This work is supported in part by the Foundation of Advanced Research Centre, Zhongshan University.
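For reference, the link between the 0,1-matrix and topological entropy mentioned above is concrete: the entropy of a subshift of finite type equals the logarithm of the Perron (largest) eigenvalue of its matrix. A minimal sketch; the golden-mean matrix below is an illustrative example, not taken from the paper:

```python
import numpy as np

def sft_entropy(A):
    """Topological entropy of the subshift of finite type defined by the
    0,1 transition matrix A: log of the Perron (largest) eigenvalue."""
    lam = max(abs(np.linalg.eigvals(np.asarray(A, dtype=float))))
    return float(np.log(lam))

# Golden-mean shift: the symbol 1 may not follow itself.
A = [[1, 1],
     [1, 0]]
# Its Perron eigenvalue is the golden ratio (1 + sqrt(5)) / 2.
```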
3.
An equivalent representation of the Spearman footrule is considered and a characterization in terms of a Markov chain is established. A martingale approach is thereby incorporated in the study of the asymptotic normality of the statistics.
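The Spearman footrule statistic itself is elementary; a minimal sketch, measuring a permutation against the identity ranking (the representation studied in the paper is an equivalent reformulation, not this direct sum):

```python
def spearman_footrule(sigma):
    """Spearman footrule distance of a 0-indexed permutation sigma from the
    identity: the sum over i of |i - sigma[i]|."""
    return sum(abs(i - s) for i, s in enumerate(sigma))

# The identity permutation has distance 0; the reversal attains the
# maximum, floor(n^2 / 2).
```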
4.
5.
In this paper we study the flux through a finite Markov chain of a quantity, which we call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for modeling open systems whose dynamics have the Markov property.
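The dynamics described above can be sketched in a few lines; a toy illustration, assuming one unit of mass is injected at a fixed source state per step and that absorbing states are those with P[i,i] = 1 (the two-state chain is invented for illustration):

```python
import numpy as np

def mass_flux(P, source, steps):
    """Propagate mass through a finite chain with transition matrix P.
    One unit of mass enters at `source` every step; mass reaching an
    absorbing state (P[i, i] == 1) stays there. Returns the mass vector."""
    P = np.asarray(P, dtype=float)
    m = np.zeros(P.shape[0])
    for _ in range(steps):
        m[source] += 1.0  # external supply
        m = m @ P         # one Markov step for every unit of mass
    return m

# Illustrative chain: state 0 is transient, state 1 is absorbing.
P = [[0.5, 0.5],
     [0.0, 1.0]]
```

Because each row of P sums to one, transitions conserve mass: after k steps the total mass in the system equals exactly the k units injected.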
6.
Yves Le Jan 《Advances in Mathematics》1981,42(2):136-142
7.
8.
A batch Markov arrival process (BMAP) X* = (N, J) is a two-dimensional Markov process with two components: the counting process N and the phase process J. It is proved that the phase process is a time-homogeneous Markov chain with a finite state space, or, for short, a Markov chain. In this paper, a new, inverse problem is posed for the first time: given a Markov chain J, can we construct a process N such that the two-dimensional process X* = (N, J) is a BMAP? Such a process X* = (N, J) is said to be an adjoining BMAP for the Markov chain J. For a given Markov chain the adjoining processes exist but are not unique. Two kinds of adjoining BMAPs are constructed: BMAPs with fixed constant batches and BMAPs with independent and identically distributed (i.i.d.) random batches. The method used in this paper is not the usual matrix-analytic method for studying BMAPs but a path-analytic one: we directly construct sample paths of adjoining BMAPs. The characteristics (D_k, k = 0, 1, 2, …) and the transition probabilities of the adjoining BMAP are expressed in terms of the density matrix Q of the given Markov chain J. Moreover, two further theorems are obtained. These expressions are presented here for the first time.
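A much-simplified, discrete-time caricature of the fixed-constant-batch idea: every transition of the phase J is accompanied by a batch of fixed size b. The transition matrix, batch size, and helper name below are illustrative assumptions, not the paper's continuous-time construction:

```python
import random

def adjoining_path(P, b, steps, j0=0, seed=0):
    """Sample path of a (discrete-time, much-simplified) adjoining process
    (N, J) for a Markov chain J with transition matrix P: every phase
    transition is accompanied by a batch of fixed size b, so N grows by
    b per step. All inputs here are illustrative."""
    rng = random.Random(seed)
    j, n = j0, 0
    path = [(0, j0)]
    for _ in range(steps):
        j = rng.choices(range(len(P)), weights=P[j])[0]  # phase transition
        n += b                                           # fixed constant batch
        path.append((n, j))
    return path
```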
9.
D. I. Shparo 《Mathematical Notes》1968,3(1):55-60
In this note we study the connection between the Martin boundaries of a non-recurrent Markov chain and its parts.
Translated from Matematicheskie Zametki, Vol. 3, No. 1, pp. 93–102, January, 1968.
In conclusion, the author would like to thank M. G. Shur for his interest in this work.
10.
This paper discusses the asymptotic behavior of the longest run in a countable-state Markov chain. Let {X_a}_{a∈Z⁺} be a stationary, strongly ergodic, reversible Markov chain on the countable state space S = {1, 2, …}. Let T ⊂ S be an arbitrary finite subset of S. Denote by L_n the length of the longest run of consecutive i's, for i ∈ T, that occurs in the sequence X_1, …, X_n. In this paper, we obtain a limit law and a weak version of an Erdős–Rényi-type law for L_n. A large deviation result for L_n is also discussed.
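The statistic L_n is straightforward to compute from a realized sample path; a minimal sketch (only runs of a single repeated state i ∈ T count, matching the definition above):

```python
def longest_run(xs, T):
    """Length L_n of the longest run of consecutive i's, for a single
    i in T, occurring in the sequence xs."""
    best = cur = 0
    prev = object()  # sentinel: matches no state
    for x in xs:
        if x in T:
            cur = cur + 1 if x == prev else 1
        else:
            cur = 0
        best = max(best, cur)
        prev = x
    return best
```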
11.
12.
《Stochastic Processes and their Applications》2020,130(12):7098-7130
Permanental sequences with non-symmetric kernels, which are generalizations of the potentials of a Markov chain with a single instantaneous state introduced by Kolmogorov, are studied. Depending on a parameter in the kernels, we obtain an exact rate of divergence of the sequence at 0, an exact local modulus of continuity of the sequence at 0, or a precise bounded discontinuity for the sequence at 0.
13.
Anders Grimvall 《Stochastic Processes and their Applications》1973,1(4):335-368
Starting from a real-valued Markov chain X_0, X_1, …, X_n with stationary transition probabilities, a random element {Y(t); t ∈ [0, 1]} of the function space D[0, 1] is constructed by letting Y(k/n) = X_k, k = 0, 1, …, n, and taking Y(t) constant in between. Sample tightness criteria for sequences {Y_n(t); t ∈ [0, 1]} of such random elements in D[0, 1] are then given in terms of the one-step transition probabilities of the underlying Markov chains. Applications are made to Galton–Watson branching processes.
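The embedding into D[0, 1] can be sketched directly; a minimal illustration of Y(k/n) = X_k with Y constant in between, assuming the usual right-continuous convention:

```python
def path_embedding(X):
    """Embed a chain sample X_0, ..., X_n into D[0, 1]: Y(k/n) = X_k,
    with Y constant between grid points (right-continuous step function)."""
    n = len(X) - 1

    def Y(t):
        if not 0.0 <= t <= 1.0:
            raise ValueError("t must lie in [0, 1]")
        return X[min(int(t * n), n)]

    return Y
```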
14.
Theodore J. Sheskin 《International Journal of Mathematical Education in Science & Technology》2013,44(5):799-805
We present a new matrix construction algorithm for computing absorption probabilities for a finite, reducible Markov chain. The construction algorithm contains two steps: matrix augmentation and matrix reduction. The algorithm requires more memory and less execution time than the calculation of absorption probabilities by the LU decomposition. We apply the algorithm to a Markovian model of a production line.
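For comparison, the textbook fundamental-matrix route to absorption probabilities (not the augmentation/reduction algorithm of the paper) solves (I − Q)B = R, where Q holds transitions among transient states and R transitions into absorbing ones. The gambler's-ruin chain below is an illustrative example:

```python
import numpy as np

def absorption_probabilities(P, transient, absorbing):
    """Textbook computation for a finite, reducible chain:
    B = (I - Q)^{-1} R, with Q the transient-to-transient block of P
    and R the transient-to-absorbing block."""
    P = np.asarray(P, dtype=float)
    Q = P[np.ix_(transient, transient)]
    R = P[np.ix_(transient, absorbing)]
    return np.linalg.solve(np.eye(len(transient)) - Q, R)

# Symmetric gambler's ruin on {0, 1, 2, 3}; barriers 0 and 3 absorb.
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
B = absorption_probabilities(P, transient=[1, 2], absorbing=[0, 3])
# From state 1 the chain is absorbed at 0 with probability 2/3.
```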
15.
This paper develops a rare event simulation algorithm for a discrete-time Markov chain in the first orthant. The algorithm gives a very good estimate of the stationary distribution along one of the axes and it is shown to be efficient. A key idea is to study an associated time reversed Markov chain that starts at the rare event. We will apply the algorithm to a Markov chain related to a Jackson network with two stations.
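The time-reversed chain at the heart of the algorithm is the standard construction P̃(i, j) = π(j)P(j, i)/π(i); a minimal sketch, with an invented two-state chain (which happens to be reversible, so its reversal equals itself):

```python
import numpy as np

def time_reversal(P, pi):
    """Time-reversed transition matrix of an ergodic chain with transition
    matrix P and stationary distribution pi:
    Pt[i, j] = pi[j] * P[j, i] / pi[i]."""
    P = np.asarray(P, dtype=float)
    pi = np.asarray(pi, dtype=float)
    return P.T * pi[np.newaxis, :] / pi[:, np.newaxis]

# Illustrative chain with stationary distribution (2/3, 1/3).
P = [[0.9, 0.1],
     [0.2, 0.8]]
Pt = time_reversal(P, [2 / 3, 1 / 3])
```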
16.
1. Introduction. In reliability theory, in order to calculate the failure frequency of a repairable system, Shi first introduced and studied the transition frequency between two disjoint state sets for a finite Markov chain and for a vector Markov process with finite discrete state space, and obtained a general formula for the transition frequency. Then, under the condition that the generator matrix of the Markov chain is uniformly bounded, Shi [8,9] again proved the transition-frequency formula and obtained three other useful formulas. Obviously, the point (or counting) process of sta…
17.
T. Nakai 《Journal of Optimization Theory and Applications》1985,45(3):425-442
The optimal-stopping problem in a partially observable Markov chain is considered and formulated as a Markov decision process. We treat a multiple stopping problem in this paper. Unlike the classical stopping problem, the current state of the chain is not known directly; information about the current state is always available from an information process. Several properties of the value and the optimal policy are given. For example, if we add another stop action to the k-stop problem, the increment of the value is decreasing in k.
The author wishes to thank Professor M. Sakaguchi of Osaka University for his encouragement and guidance. He also thanks the referees for their careful readings and helpful comments.
18.
We propose a new method for the analysis of lot-per-lot inventory systems with backorders under rationing. We introduce an embedded Markov chain that approximates the state-transition probabilities. We provide a recursive procedure for generating these probabilities and obtain the steady-state distribution.
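Given an embedded chain's transition matrix, the steady-state distribution is a standard linear-algebra step; a minimal sketch of that final step (a generic solver, not the paper's recursive procedure, and the example matrix is invented):

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of an ergodic finite chain: solve
    pi (P - I) = 0 together with the normalization sum(pi) = 1."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # overdetermined system
    b = np.zeros(n + 1)
    b[-1] = 1.0                                   # normalization row
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = steady_state([[0.9, 0.1],
                   [0.2, 0.8]])
```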
19.
Zhou Zouling 《Acta Mathematica Sinica (English Series)》1993,9(1):1-7
In this paper, we discuss the one-sided topological Markov chain and prove that the following conditions are equivalent: (1) topologically strongly mixing; (2) topologically weakly mixing; (3) topologically transitive with two co-prime periods. As a consequence, we conclude that mixing implies positive entropy, but the converse is not true.
Supported in part by the NSFC and NECF of China.
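Condition (1) can be checked mechanically: topological strong mixing is equivalent to primitivity of the 0,1-matrix, i.e. some power being entrywise positive, and Wielandt's bound makes the check finite. A minimal sketch, suitable for small matrices (entries of high powers can overflow for large n):

```python
import numpy as np

def is_strongly_mixing(A):
    """Check primitivity of a 0,1-matrix: by Wielandt's bound, an n x n
    matrix is primitive iff A^((n-1)^2 + 1) is entrywise positive.
    Primitivity corresponds to topological strong mixing of the
    associated one-sided topological Markov chain."""
    A = np.asarray(A, dtype=np.int64)
    n = A.shape[0]
    return bool((np.linalg.matrix_power(A, (n - 1) ** 2 + 1) > 0).all())
```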
20.
In this paper the limit behavior of random mappings with n vertices is investigated. We first compute the asymptotic probability that a fixed class of finite, pairwise disjoint subsets of vertices lie in different components, and we use this result to construct a particle-allocation scheme with a related Markov chain. We then prove that the limit behavior of random mappings is, in a certain sense, embedded in such a scheme. As an application, we give the asymptotic moments of the size of the largest component.