Similar Documents
20 similar documents found (search time: 0 ms)
1.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are “entangled” in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain. Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10
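The core of the lifting can be illustrated numerically. The sketch below (our own toy example, not the paper's full construction) builds vector-state amplitudes as complex square roots of the joint probabilities of a two-step classical chain and checks that the squared moduli of the amplitudes recover the classical joint law on the computational basis.

```python
import numpy as np

p0 = np.array([0.6, 0.4])                    # initial distribution
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])                   # transition matrix

# Classical joint probability of a path (i, j): p0[i] * P[i, j]
joint = p0[:, None] * P

# Complex square roots with arbitrary phases (the entanglement lives in the
# phases; the moduli are fixed by the classical chain)
rng = np.random.default_rng(0)
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=joint.shape))
amplitudes = np.sqrt(joint) * phases

# Restricting to the projectors on the computational basis returns the
# classical chain: |psi_{ij}|^2 equals the joint probability of (i, j)
recovered = np.abs(amplitudes) ** 2
assert np.allclose(recovered, joint)
print(recovered.sum())
```

The phases are irrelevant for the diagonal restriction, which is exactly the sense in which the quantum chain reduces to the classical one on this basis.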

2.
A general notion of positive dependence among successive observations in a finite-state stationary process is studied, with particular attention to the case of a stationary ergodic Markov chain. This dependence condition can be expressed as a positivity condition on the joint probability matrices of pairs of observations. Some useful conditions equivalent to positive dependence are obtained for reversible chains, but shown not to be equivalent for nonreversible chains.
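To make the "positivity condition on joint probability matrices" concrete, here is a hedged sketch: for a stationary chain with stationary distribution pi and transition matrix P, form the joint matrix of consecutive observations, M[i, j] = pi[i] * P[i, j], and test one classical positivity notion (TP2: all 2x2 minors nonnegative). The paper's exact condition may differ; this only illustrates the kind of check involved.

```python
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a distribution."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def is_tp2(M, tol=1e-12):
    """All 2x2 minors of M nonnegative (total positivity of order 2)."""
    n, m = M.shape
    for i in range(n - 1):
        for j in range(m - 1):
            if M[i, j] * M[i + 1, j + 1] - M[i, j + 1] * M[i + 1, j] < -tol:
                return False
    return True

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])          # a reversible two-state chain
pi = stationary(P)
M = pi[:, None] * P                 # joint law of (X_n, X_{n+1})
print(is_tp2(M))
```

For this chain pi = (0.6, 0.4) and the single minor is 0.48·0.28 − 0.12·0.12 > 0, so the check passes.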

3.
Conditions, together with quantitative estimates, are obtained for Markov chains with a common state space that are uniformly continuous in time. Within the framework of the method under consideration, it is shown that such chains admit finite approximation. Translated from Problemy Ustoichivosti Stokhasticheskikh Modelei — Trudy Seminara, pp. 4–12, 1980.

4.
The aim of this paper is to examine multiple Markov dependence for the discrete as well as for the continuous parameter case. In both cases the Markov property with arbitrary parameter values is investigated and it is shown that it leads to the degeneration of the multiple Markov dependence to the simple one.

5.
The following modification of a general state space discrete-time Markov chain is considered: certain transitions are supposed “forbidden” and the chain evolves until there is such a transition. At this instant the value of the chain is “replaced” according to a given rule, and, starting from the new value, the chain evolves normally until there is a forbidden transition again; the cycle is then repeated. The relationship of this modified process to the original one is studied in general terms, with particular emphasis being given to invariant measures. Examples are given which illustrate the results obtained.
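The modified process is easy to simulate. The sketch below (names and the example are ours, not the paper's) runs a finite-state chain under P, and whenever a forbidden transition is attempted, replaces the new value according to a fixed rule before continuing.

```python
import numpy as np

def modified_chain(P, forbidden, replace, x0, n_steps, rng):
    """forbidden: set of (i, j) pairs; replace: dict mapping j -> new state."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        y = rng.choice(len(P), p=P[x])
        if (x, y) in forbidden:
            y = replace[y]          # replacement rule applied at the forbidden jump
        x = y
        path.append(x)
    return path

P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
rng = np.random.default_rng(1)
# Forbid the transition 1 -> 2 and send the chain back to state 0 instead.
path = modified_chain(P, {(1, 2)}, {2: 0}, 0, 10_000, rng)
counts = np.bincount(path, minlength=3) / len(path)
print(counts)    # empirical occupation frequencies of the modified chain
```

Starting from state 0, state 2 is only reachable through the forbidden transition, so the modified chain never visits it: the invariant measure of the modified process is supported on {0, 1}, unlike that of the original chain.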

6.
7.
In this paper we introduce a model which provides a new approach to the phenomenon of stochastic resonance. It is based on the study of the properties of the stationary distribution of the underlying stochastic process. We derive the formula for the spectral power amplification coefficient and study its asymptotic properties and dependence on the parameters.

8.
We explore the use of the concept of lumpability of continuous time discrete state space Markov processes in the context of risk management and propose an approximate lumpability procedure that may be useful when exact lumpability does not hold.
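As a hedged sketch of what exact lumpability requires (the example and helper names are ours, not the paper's), here is the standard strong-lumpability test for a continuous-time generator Q and a partition of the state space: within each block, every state must have the same total rate into each other block.

```python
import numpy as np

def is_lumpable(Q, blocks, tol=1e-9):
    """Strong lumpability of generator Q with respect to a partition `blocks`."""
    for B in blocks:
        for C in blocks:
            if B is C:
                continue
            # aggregated rate from each state in B into the block C
            rates = [Q[i, list(C)].sum() for i in B]
            if max(rates) - min(rates) > tol:
                return False
    return True

# A 3-state generator: states 1 and 2 have the same total rate into state 0,
# so they lump together; the partition {0,1}, {2} fails the test.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -5.0,  2.0],
              [ 3.0,  2.0, -5.0]])
print(is_lumpable(Q, [[0], [1, 2]]))   # lumpable
print(is_lumpable(Q, [[0, 1], [2]]))   # not lumpable
```

An approximate procedure of the kind the abstract proposes would relax the equality of these aggregated rates to approximate equality; the precise criterion is the paper's contribution.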

9.
It is known that each Markov chain has associated with it a polytope and a family of Markov measures indexed by the interior points of the polytope. Measure-preserving factor maps between Markov chains must preserve the associated families. In the present paper, we augment this structure by identifying measures corresponding to points on the boundary of the polytope. These measures are also preserved by factor maps. We examine the data they provide and give examples to illustrate the use of this data in ruling out the existence of factor maps between Markov chains. E. Cawley was partially supported by the Modern Analysis joint NSF grant in Berkeley. S. Tuncel was partially supported by NSF Grant DMS-9303240.

10.
It is shown that the combinatorics of commutation relations is well suited for analyzing the convergence rate of certain Markov chains. Examples studied include random walk on irreducible representations, a local random walk on partitions whose stationary distribution is the Ewens distribution, and some birth–death chains.

11.
In principle it is possible to characterize the long run behavior of any evolutionary game by finding an analytical expression for its limit probability distribution. However, it is cumbersome to do so when the state space is large and the rate of mutation is significant. This paper gives upper and lower bounds for the limit distribution, which are easy to compute. The bounds are expressed in terms of the maximal and minimal row sums of parts of the transition matrix.

12.
13.
This paper discusses finite-dimensional optimal filters for partially observed Markov chains. A model for a system containing a finite number of components where each component behaves like an independent finite state continuous-time Markov chain is considered. Using measure change techniques various estimators are derived.
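For intuition, here is a discrete-time analogue of such a filter (the paper itself works in continuous time with measure-change techniques): the recursive predict/correct filter for the conditional state distribution of one hidden finite-state chain given noisy observations. The model parameters below are our own illustrative values.

```python
import numpy as np

def hmm_filter(P, B, p0, obs):
    """P: transition matrix; B[i, y]: observation likelihoods; p0: prior."""
    p = p0.copy()
    for y in obs:
        p = (p @ P) * B[:, y]       # predict, then correct with the likelihood
        p /= p.sum()                # normalise (the measure-change denominator)
    return p

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],           # state 0 mostly emits symbol 0
              [0.2, 0.8]])          # state 1 mostly emits symbol 1
p0 = np.array([0.5, 0.5])
posterior = hmm_filter(P, B, p0, [0, 0, 1])
print(posterior)
```

The filter stays finite-dimensional because the conditional distribution of a finite-state chain is a point in a fixed-dimensional simplex, which is the property the paper exploits.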

14.
15.
16.
We consider convergence of Markov chains with uncertain parameters, known as imprecise Markov chains, which contain an absorbing state. We prove that under conditioning on non-absorption the imprecise conditional probabilities converge independently of the initial imprecise probability distribution if some regularity conditions are assumed. This is a generalisation of a known result from the classical theory of Markov chains by Darroch and Seneta [6].
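The classical (precise-probability) Darroch–Seneta result that the abstract generalises can be checked numerically: for a chain with an absorbing state, the distribution conditioned on non-absorption converges to a quasi-stationary distribution that forgets the initial distribution. The example below is our own sketch.

```python
import numpy as np

def conditional_dist(Q, p0, n):
    """Q: substochastic matrix on the transient states; condition on survival."""
    p = p0 @ np.linalg.matrix_power(Q, n)
    return p / p.sum()

# Transient part of a 3-state chain whose third state is absorbing
# (rows sum to less than 1; the deficit is the absorption rate).
Q = np.array([[0.6, 0.3],
              [0.4, 0.4]])
a = conditional_dist(Q, np.array([1.0, 0.0]), 200)
b = conditional_dist(Q, np.array([0.0, 1.0]), 200)
print(np.allclose(a, b))   # the limit forgets the initial distribution
```

The common limit is the normalised left Perron eigenvector of Q; the imprecise version of the paper replaces the single matrix Q by a set of matrices.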

17.
Persi Diaconis and Phil Hanlon in their interesting paper (4) give the rates of convergence of some Metropolis Markov chains on the cube Z_2^d. Markov chains on finite groups that are actually random walks are easier to analyze because the machinery of harmonic analysis is available. Unfortunately, Metropolis Markov chains are, in general, not random walks on a group structure. In attempting to understand Diaconis and Hanlon's work, the authors were led to the idea of a hypergroup deformation of a finite group G, i.e., a continuous family of hypergroups whose underlying space is G and whose structure is naturally related to that of G. Such a deformation is provided for Z_2^d, and it is shown that the Metropolis Markov chains studied by Diaconis and Hanlon can be viewed as random walks on the deformation. A direct application of the Diaconis–Shahshahani Upper Bound Lemma, which applies to random walks on hypergroups, is used to obtain the rate of convergence of the Metropolis chains starting at any point. When the Markov chains start at 0, a result in Diaconis and Hanlon (4) is obtained with exactly the same rate of convergence. These results are extended to Z_3^d. Research supported in part by the Office of Research and Sponsored Programs, University of Oregon.
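A minimal sketch of a Metropolis chain on the cube Z_2^d of the kind studied by Diaconis and Hanlon: propose flipping a uniformly chosen coordinate and accept with probability min(1, theta^(+/-1)), so that the stationary law is proportional to theta raised to the Hamming weight. The values of d and theta below are our own illustrative choices.

```python
import numpy as np

def metropolis_cube(d, theta, n_steps, rng):
    """Metropolis chain on {0,1}^d with pi(x) proportional to theta**weight(x)."""
    x = np.zeros(d, dtype=int)
    weights = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        i = rng.integers(d)
        ratio = theta if x[i] == 0 else 1.0 / theta   # pi(flipped) / pi(x)
        if rng.random() < min(1.0, ratio):
            x[i] ^= 1                                 # accept the flip
        weights[t] = x.sum()
    return weights

rng = np.random.default_rng(2)
d, theta = 10, 0.5
w = metropolis_cube(d, theta, 200_000, rng)
# Under pi the coordinates are i.i.d. Bernoulli(theta / (1 + theta)),
# so the mean Hamming weight should approach d * theta / (1 + theta).
print(w[50_000:].mean(), d * theta / (1 + theta))
```

The convergence-rate results of the paper quantify how quickly such empirical averages stabilise, via the hypergroup-deformation version of the Upper Bound Lemma.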

18.
19.
The isomorphism theorem of Dynkin is an important tool for investigating problems phrased in terms of local times of Markov processes. This theorem concerns continuous time Markov processes. We give here an equivalent version for Markov chains.

20.