Similar literature (20 results)
1.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are “entangled” in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain. Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10
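The Schur-product idea in this abstract can be illustrated with a small numerical sketch. Everything below (the two-state chain, the phases) is a hypothetical example, not taken from the paper: amplitudes are complex square roots of the classical transition probabilities, and the entrywise (Schur) product with their conjugates recovers the classical chain, matching the claim that restricting to computational-basis projectors reduces the quantum chain to the classical one.

```python
import numpy as np

# Classical transition matrix of a 2-state chain (rows sum to 1); hypothetical.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Entangled-lifting sketch: amplitudes are complex square roots of the
# transition probabilities (the phases are arbitrary, chosen for illustration).
phases = np.exp(1j * np.array([[0.0, 0.5], [1.0, 1.5]]))
A = np.sqrt(P) * phases

# The Schur (entrywise) product of A with its conjugate recovers the classical
# probabilities, i.e. the quantum chain restricted to the computational-basis
# projectors reduces to the classical one.
recovered = (A * A.conj()).real
assert np.allclose(recovered, P)

# For reference: classical joint two-step correlations use ordinary matrix
# multiplication; the paper's joint-correlation formula replaces it by the
# Schur product of amplitude matrices.
classical_two_step = P @ P
print(classical_two_step)
```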

2.
In the paper we introduce stopping times for quantum Markov states. We study the algebras and maps corresponding to stopping times, give a condition for the strong Markov property, and classify projections with respect to accessibility. Our main result is a new recurrence criterion in terms of stopping times (Theorem 1 and Corollary 2). As an application of the criterion, we show in Section 6 that, for the quantum Markov chain associated with the one-dimensional Heisenberg model, the (usually non-Markovian) process obtained from this quantum Markov chain by restriction to a diagonal subalgebra is such that all its states are recurrent. We were not able to obtain this result from the known recurrence criteria of classical probability. Supported by GNAFA-CNR, Bando n. 211.01.25.

3.
1. Introduction
The motivation for writing this paper came from calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature, for example, [3–7]. For a finite Markov chain, …
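The 180° rotation that defines the dual chain is easy to sketch numerically. The transition matrix below is a hypothetical example, not from the paper; the point is only that the rotated matrix is again stochastic and so defines a new Markov chain.

```python
import numpy as np

def rotate_180(P):
    """Rotate a matrix by 180 degrees: entry (i, j) moves to (n-1-i, n-1-j)."""
    return P[::-1, ::-1]

def stationary(P):
    """Stationary distribution via the left eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# A small irreducible transition matrix (hypothetical).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

# Rotating by 180 degrees reverses both row and column order, so each row of Q
# is a reversed row of P and still sums to 1: Q defines the "dual" chain.
Q = rotate_180(P)
assert np.allclose(Q.sum(axis=1), 1.0)
print(stationary(P), stationary(Q))
```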

4.
We prove that a quantum stochastic differential equation is the interaction representation of the Cauchy problem for the Schrödinger equation with Hamiltonian given by a certain operator restricted by a boundary condition. If the deficiency index of the boundary-value problem is trivial, then the corresponding quantum stochastic differential equation has a unique unitary solution. Therefore, by the deficiency index of a quantum stochastic differential equation we mean the deficiency index of the related symmetric boundary-value problem. In this paper, conditions sufficient for the essential self-adjointness of the symmetric boundary-value problem are obtained. These conditions are closely related to nonexplosion conditions for the pair of master Markov equations that we canonically assign to the quantum stochastic differential equation.

5.
Strong limit theorems for third-order circular-tree-indexed Markov chains on homogeneous trees
We first give the definition of Markov chains indexed by a third-order circular tree on a homogeneous tree. Then, using a martingale construction, we study strong limit theorems for such chains, and we give strong laws of large numbers for the frequencies of occurrence of their states and of ordered pairs of states.

6.
Summary  The states of a Markov chain may be on an ordinal or a nominal scale. In this situation we need to assign appropriate scores to the states in order to study a given problem in detail. Using Fisher's criterion for assigning optimum scores to the marginals of an m×n contingency table, we shall obtain a system of optimum scores to assign to the states of a stationary Markov chain of order one.
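Fisher's optimum-scores criterion for a contingency table is closely related to correspondence analysis. The sketch below shows one common way such scores are computed from the transition-count table of an observed chain; the count table is hypothetical, and this is a generic construction, not necessarily the paper's exact one.

```python
import numpy as np

# Hypothetical transition-count table: entry (i, j) counts consecutive
# observations (state i, state j) of a stationary order-one chain.
N = np.array([[30.0, 10.0,  5.0],
              [12.0, 25.0,  8.0],
              [ 3.0, 10.0, 40.0]])

# Correspondence-analysis-style scoring: standardize the table by its
# marginals, remove the trivial (independence) dimension, and take the
# leading singular vectors as scores.
n = N.sum()
r = N.sum(axis=1) / n          # row marginals
c = N.sum(axis=0) / n          # column marginals
S = N / n - np.outer(r, c)     # residuals from independence
Z = S / np.sqrt(np.outer(r, c))
U, d, Vt = np.linalg.svd(Z)

# Optimum scores for the states: leading nontrivial singular vectors,
# rescaled by the marginals. They have weighted mean 0 and weighted variance 1.
row_scores = U[:, 0] / np.sqrt(r)
col_scores = Vt[0, :] / np.sqrt(c)
print(row_scores, col_scores)
```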

7.
In this paper, we study a reflected Markov-modulated Brownian motion with two-sided reflection, in which the drift, the diffusion coefficient, and the two boundaries are (jointly) modulated by a finite-state-space irreducible continuous-time Markov chain. The goal is to compute the stationary distribution of this Markov process, which, in addition to the complication of having a stochastic boundary, can also include jumps at state-change epochs of the underlying Markov chain because of the boundary changes. We give the general theory and then specialize to the case where the underlying Markov chain has two states.

8.
Focusing on stochastic dynamics that involve continuous states as well as discrete events, this article investigates a stochastic logistic model with regime switching modulated by a singular Markov chain involving a small parameter. This Markov chain undergoes weak and strong interactions, where the small parameter is used to reflect the rapid rate of regime switching within each state class. A two-time-scale formulation is used to reduce the complexity. We obtain weak convergence of the underlying system, so that the limit has a much simpler structure. We then use the structure of the limit system as a bridge to investigate the stochastic permanence of the original system, driven by a singular Markov chain with a large number of states. Sufficient conditions for stochastic permanence are obtained. A couple of examples and numerical simulations are given to illustrate our results.

9.
In this paper, we consider quantum multidimensional problems solvable by using the second quantization method. A multidimensional generalization of the Bogolyubov factorization formula, which is an important particular case of the Campbell-Baker-Hausdorff formula, is established. The inner product of multidimensional squeezed states is calculated explicitly; this relationship justifies a general construction of orthonormal systems generated by linear combinations of squeezed states. A correctly defined path integral representation is derived for solutions of the Cauchy problem for the Schrödinger equation describing the dynamics of a charged particle in the superposition of orthogonal constant (E,H)-fields and a periodic electric field. We show that the evolution of squeezed states runs over compact one-dimensional matrix-valued orbits of squeezed components of the solution, and the evolution of coherent shifts is a random Markov jump process which depends on the periodic component of the potential.

10.
In this paper, we extend the Markov-modulated reflected Brownian motion model discussed in [1] to a Markov-modulated reflected jump diffusion process, where the jump component is described as a Markov-modulated compound Poisson process. We compute the joint stationary distribution of the bivariate Markov jump process. An abstract example with two states is given to illustrate how the stationary equation, described as a system of ordinary integro-differential equations, is solved by choosing appropriate boundary conditions. As a special case, we also give the stationary distribution for this Markov jump process without Markovian regime-switching.

11.
The practical usefulness of Markov models and Markov decision processes has been severely limited by their extremely large dimension. Thus, a reduced model that does not sacrifice significant accuracy can be very interesting.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after the decomposition into classes of communicating states. In this paper we expound a new reduction method for the ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution. This class constitutes an irreducible finite ergodic Markov chain, which evolves independently once the class is entered.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then enables us to make an assumption that yields the reduction. Thus, we reduce the ergodic class to its stronger part alone, which contains the most important events, which also evolve more slowly. The reduced system keeps the stochastic property, so it is again a Markov chain.
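One standard way to realize this kind of reduction, keeping the high-probability states and preserving the Markov property, is censoring via the stochastic complement. The sketch below uses a hypothetical 4-state chain with two slow, high-probability states; the paper's own method may differ in detail.

```python
import numpy as np

def stationary(P):
    """Stationary distribution via the left eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def reduce_chain(P, k):
    """Keep the k states with largest steady-state probability and censor the
    chain onto them via the stochastic complement A + B (I - D)^-1 C, which is
    again a stochastic matrix."""
    pi = stationary(P)
    keep = np.argsort(pi)[::-1][:k]
    drop = np.setdiff1d(np.arange(len(pi)), keep)
    A = P[np.ix_(keep, keep)]
    B = P[np.ix_(keep, drop)]
    C = P[np.ix_(drop, keep)]
    D = P[np.ix_(drop, drop)]
    return A + B @ np.linalg.inv(np.eye(len(drop)) - D) @ C, keep

# Hypothetical 4-state ergodic chain: states 0 and 1 are "slow" and carry most
# of the steady-state probability; states 2 and 3 are fast and rarely visited.
P = np.array([[0.90, 0.05, 0.03, 0.02],
              [0.05, 0.90, 0.02, 0.03],
              [0.40, 0.40, 0.10, 0.10],
              [0.35, 0.45, 0.10, 0.10]])
R, keep = reduce_chain(P, 2)
assert np.allclose(R.sum(axis=1), 1.0)   # the reduced system is still Markov
print(keep, R)
```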

12.
The well-known Hammersley–Clifford Theorem states (under certain conditions) that any Markov random field is a Gibbs state for a nearest-neighbor interaction. In this paper we study Markov random fields for which the proof of the Hammersley–Clifford Theorem does not apply. Following Petersen and Schmidt, we utilize the formalism of cocycles for the homoclinic equivalence relation and introduce “Markov cocycles”, reparametrizations of Markov specifications. The main part of this paper exploits this to deduce the conclusion of the Hammersley–Clifford Theorem for a family of Markov random fields which are outside the theorem's purview, where the underlying graph is Z^d. This family includes all Markov random fields whose support is the d-dimensional “3-colored chessboard”. On the other extreme, we construct a family of shift-invariant Markov random fields which are not given by any finite-range shift-invariant interaction.

13.
We study necessary and sufficient conditions for a finite ergodic Markov chain to converge in a finite number of transitions to its stationary distribution. Using this result, we describe the class of Markov chains which attain the stationary distribution in a finite number of steps, independent of the initial distribution. We then exhibit a queueing model that has a Markov chain, embedded at the points of regeneration, that falls within this class. Finally, we examine the class of continuous-time Markov processes whose embedded Markov chain possesses the property of rapid convergence, and find that, in the case where the distribution of sojourn times is independent of the state, we can compute the distribution of the system at time t as a simple closed-form expression.
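Finite-step convergence means some power of the transition matrix already has identical rows. The sketch below checks this numerically; the 3-state matrix is a hypothetical example whose non-unit eigenvalues are all zero, so it reaches stationarity exactly after two steps, regardless of the initial distribution.

```python
import numpy as np

def steps_to_stationarity(P, max_n=50, tol=1e-12):
    """Smallest n such that P**n has identical rows (exact convergence),
    or None if no power up to max_n does."""
    Q = np.eye(len(P))
    for n in range(1, max_n + 1):
        Q = Q @ P
        if np.allclose(Q, Q[0], rtol=0, atol=tol):  # all rows equal row 0
            return n
    return None

# Hypothetical ergodic 3-state chain with eigenvalues 1, 0, 0: it attains its
# stationary distribution (the uniform one) exactly at step 2.
P = np.array([[1/2, 1/2, 0.0],
              [1/6, 1/6, 2/3],
              [1/3, 1/3, 1/3]])
print(steps_to_stationarity(P))   # 2: P squared already has identical rows
```

A generic chain such as [[0.9, 0.1], [0.2, 0.8]] only converges asymptotically, so the checker returns None for it.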

14.
15.
We compare different selection criteria to choose the number of latent states of a multivariate latent Markov model for longitudinal data. This model is based on an underlying Markov chain to represent the evolution of a latent characteristic of a group of individuals over time. Then, the response variables observed at different occasions are assumed to be conditionally independent given this chain. Maximum likelihood estimation of the model is carried out through an Expectation–Maximization algorithm based on forward–backward recursions, which are well known in the hidden Markov literature for time series. The selection criteria we consider are based on penalized versions of the maximum log-likelihood or on the posterior probabilities of belonging to each latent state, that is, the conditional probability of the latent state given the observed data. Among the latter criteria, we propose an appropriate entropy measure tailored to latent Markov models. We show the results of a Monte Carlo simulation study aimed at comparing the performance of the above state selection criteria on the basis of a wide set of model specifications.
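Entropy-based state selection can be sketched with the generic normalized-entropy index computed from posterior probabilities; the paper proposes a version tailored to latent Markov models, so the formula and data below are illustrative only.

```python
import numpy as np

def normalized_entropy_index(post):
    """Generic entropy-based separation index from posterior probabilities.

    post: (n_obs, k) array, row i holding the posterior probabilities of the
    k latent states for observation i. Returns a value in [0, 1]; values near
    1 indicate well-separated latent states."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1.0)          # guard against log(0)
    entropy = -np.sum(p * np.log(p))
    return 1.0 - entropy / (n * np.log(k))

# Hypothetical posteriors for 4 observations and 2 latent states.
sharp = np.array([[0.99, 0.01], [0.02, 0.98], [0.97, 0.03], [0.01, 0.99]])
flat = np.full((4, 2), 0.5)

# Sharp posteriors (confident classification) score higher than flat ones.
assert normalized_entropy_index(sharp) > normalized_entropy_index(flat)
print(normalized_entropy_index(sharp), normalized_entropy_index(flat))
```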

16.
We prove necessary and sufficient conditions for the transience of the non-zero states in a non-homogeneous, continuous time Markov branching process. The result is obtained by passing from results about the discrete time skeleton of the continuous time chain to the continuous time chain itself. An alternative proof of a result for continuous time Markov branching processes in random environments is then given, showing that earlier moment conditions were not necessary.

17.
Li  Quan-Lin  Zhao  Yiqiang Q. 《Queueing Systems》2004,47(1-2):5-43
In this paper, we consider a MAP/G/1 queue with MAP arrivals of negative customers, where there are two types of service times and two classes of removal rules, the RCA and RCH, as introduced in Section 2. We provide an approach for analyzing the system. This approach is based on the classical supplementary variable method, combined with the matrix-analytic method and the censoring technique. By using this approach, we are able to relate the boundary conditions of the system of differential equations to a Markov chain of GI/G/1 type or a Markov renewal process of GI/G/1 type. This leads to a solution of the boundary equations, which is crucial for solving the system of differential equations. We also provide expressions for the distributions of the stationary queue length and virtual sojourn time, and the Laplace transform of the busy period. Moreover, we provide an analysis of the asymptotics of the stationary queue length of the MAP/G/1 queues with and without negative customers.

18.
The viscous quantum hydrodynamic model derived for semiconductor simulation is studied in this paper. The principal part of the vQHD system constitutes a parameter‐elliptic operator provided that boundary conditions satisfying the Shapiro–Lopatinskii criterion are specified. We classify admissible boundary conditions and show that this principal part generates an analytic semigroup, from which we then obtain the local in time well‐posedness. Furthermore, the exponential stability of zero current and large current steady states is proved, without any kind of subsonic condition. The decay rate is given explicitly. Copyright © 2010 John Wiley & Sons, Ltd.

19.
In this lecture we present a brief outline of boson Fock space stochastic calculus based on the creation, conservation and annihilation operators of free field theory, as given in the 1984 paper of Hudson and Parthasarathy [9]. We show how a part of this architecture yields Gaussian fields stationary under a group action. Then we introduce the notion of semigroups of quasifree completely positive maps on the algebra of all bounded operators in the boson Fock space Γ(ℂ^n) over ℂ^n. These semigroups are not strongly continuous, but their preduals map Gaussian states to Gaussian states. They were first introduced, and their generators were shown to be of the Lindblad type, by Vanheuverzwijn [19]. They were recently investigated in the context of quantum information theory by Heinosaari et al. [7]. Here we present the exact noisy Schrödinger equation which dilates such a semigroup to a quantum Gaussian Markov process.

20.
A discrete-time Markov chain is defined on the real line as follows: when it is to the left (respectively, right) of the “boundary”, the chain performs a random walk jump with distribution U (respectively, V). The “boundary” is a point moving at a constant speed γ. We examine certain long-term properties and their dependence on γ. For example, if both U and V drift away from the boundary, then the chain will eventually spend all of its time on one side of the boundary; we show that in the integer-valued case, the probability of ending up on the left side, viewed as a function of γ, is typically discontinuous at every rational number in a certain interval and continuous everywhere else. Another result is that if U and V are integer-valued and drift toward the boundary, then when viewed from the moving boundary, the chain has a unique invariant distribution, which is absolutely continuous whenever γ is irrational.
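The drift-away case described above is easy to observe in simulation. The jump laws U and V below are hypothetical integer-valued distributions chosen so that both drift away from the boundary; the chain then settles on one side, so the fraction of time spent on the left ends up near 0 or near 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(gamma, n_steps=5000):
    """Simulate the chain: left of the moving boundary, jump with law U;
    right of (or on) it, jump with law V. Both laws drift away from the
    boundary. Returns the fraction of time spent left of the boundary."""
    x, boundary, left_time = 0.0, 0.0, 0
    for _ in range(n_steps):
        if x < boundary:
            x += rng.choice([-2, -1, 1], p=[0.5, 0.3, 0.2])  # law U, mean -1.1
        else:
            x += rng.choice([-1, 1, 2], p=[0.2, 0.3, 0.5])   # law V, mean +1.1
        boundary += gamma            # the boundary moves at constant speed
        left_time += x < boundary
    return left_time / n_steps

frac = simulate(gamma=0.25)
print(frac)   # near 0 or near 1: the chain settles on one side of the boundary
```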
