Similar literature (20 results)
1.
The state 0 of a birth and death process with state space E = {0, 1, 2, ...} is a barrier which can be classified into four kinds: reflection, absorption, leaping reflection, and quasi-leaping reflection. For the first, second and fourth barriers, characteristic numbers of different forms have been introduced previously. In this paper, unified characteristic numbers for birth and death processes with barriers are introduced, the related equations are solved, and the solutions are expressed in terms of the unified characteristic numbers. This work solves the probability construction problem for birth and death processes with a leaping reflection barrier or a quasi-leaping reflection barrier.
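As a small illustration of the objects involved (not the paper's construction), the sketch below builds a hypothetical birth and death chain on {0, ..., N} with a reflecting barrier at 0, computes its stationary distribution from detailed balance, and verifies it against the generator. The rates b, d are invented for the example.

```python
import numpy as np

# Hypothetical birth-death chain on {0,...,N} with a reflecting barrier at 0:
# birth rates b_k (none from N), death rates d_k (none from 0).
# Detailed balance: pi_k * b_k = pi_{k+1} * d_{k+1}.
N = 5
b = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.0])
d = np.array([0.0, 2.0, 2.0, 2.0, 2.0, 2.0])

pi = np.ones(N + 1)
for k in range(N):
    pi[k + 1] = pi[k] * b[k] / d[k + 1]
pi /= pi.sum()

# Build the generator Q and check stationarity: pi Q = 0.
Q = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        Q[k, k + 1] = b[k]
    if k > 0:
        Q[k, k - 1] = d[k]
    Q[k, k] = -(b[k] + d[k])
print(np.round(pi, 4))
```

With b/d = 1/2 throughout, the stationary mass decays geometrically away from the barrier.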

2.
We study the problem of stationarity and ergodicity for autoregressive multinomial logistic time series models which possibly include a latent process and are defined by a GARCH-type recursive equation. We improve considerably upon the existing conditions about stationarity and ergodicity of those models. Proofs are based on theory developed for chains with complete connections. A useful coupling technique is employed for studying ergodicity of infinite order finite-state stochastic processes which generalize finite-state Markov chains. Furthermore, for the case of finite order Markov chains, we discuss ergodicity properties of a model which includes strongly exogenous but not necessarily bounded covariates.

3.
We consider a certain class of nonsymmetric Markov chains and obtain heat kernel bounds and parabolic Harnack inequalities. Using the heat kernel estimates, we establish a sufficient condition for the family of Markov chains to converge to nonsymmetric diffusions. As an application, we approximate nonsymmetric diffusions in divergence form with bounded coefficients by nonsymmetric Markov chains. This extends the results by Stroock and Zheng to the nonsymmetric divergence forms. © 2012 Wiley Periodicals, Inc.

4.
In previous work, the embedding problem is examined within the entire set of discrete-time Markov chains. However, for several phenomena, the states of a Markov model are ordered categories and the transition matrix is state-wise monotone. The present paper investigates the embedding problem for the specific subset of state-wise monotone Markov chains. We prove necessary conditions on the transition matrix of a discrete-time Markov chain with ordered states to be embeddable in a state-wise monotone Markov chain with respect to time intervals of length 0.5: a transition matrix with a square root within the set of state-wise monotone matrices has trace at least equal to 1.
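The necessary condition can be checked numerically on a made-up example (the matrix R below is illustrative, not from the paper): take a state-wise monotone stochastic root R, form P = R², and observe that trace(P) ≥ 1.

```python
import numpy as np

# A hypothetical state-wise monotone stochastic matrix: rows are
# stochastically ordered, i.e. cumulative row sums decrease down the rows.
R = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

cums = R.cumsum(axis=1)
assert np.all(cums[:-1] >= cums[1:] - 1e-12)  # state-wise monotonicity

# P has the state-wise monotone square root R, so its trace must be >= 1.
P = R @ R
print(P.trace())   # >= 1, consistent with the necessary condition
```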

5.
Optimization, 2012, 61(6): 853-857
The paper extends the concept of decision and forecast horizons from classes of stationary to classes of nonstationary Markov decision problems. The horizons are explicitly obtained for a family of inventory models. The family is indexed by nonstationary Markov chains and deterministic sequences. For the proof, only reference to earlier work on the stationary case is made.

6.
The problem of multivariate information analysis is considered. First, the interaction information in each dimension is defined analogously to McGill [4] and then applied to Markov chains. The property of zero interaction information is closely related to a certain class of weakly dependent random variables. For homogeneous, recurrent Markov chains with m states, m, n ≥ 3, the zero criterion of n-dimensional interaction information is achieved only by (n - 2)-dependent Markov chains, which are generated by some nilpotent matrices. Further, for Gaussian Markov chains, it gives the decomposition rule of the variables into mutually correlated subchains.

7.
It is known that each Markov chain has associated with it a polytope and a family of Markov measures indexed by the interior points of the polytope. Measure-preserving factor maps between Markov chains must preserve the associated families. In the present paper, we augment this structure by identifying measures corresponding to points on the boundary of the polytope. These measures are also preserved by factor maps. We examine the data they provide and give examples to illustrate the use of this data in ruling out the existence of factor maps between Markov chains. E. Cawley was partially supported by the Modern Analysis joint NSF grant in Berkeley. S. Tuncel was partially supported by NSF Grant DMS-9303240.

8.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are "entangled" in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain. Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10.

9.
The investigation of the Mach reflection formed after the impingement of a weak plane shock wave on a wedge, with shock Mach number Ms near 1, is still an open problem [12]. It is difficult in shock tube experiments with an interferometer to detect a contact discontinuity if it is too weak, and it is also difficult to capture with due accuracy the transition condition between Mach reflection and regular reflection. Interest in this phenomenon continues, especially for weak shocks, because of a systematic discrepancy between the simplified three-shock theory of von Neumann [8] and shock tube results [15], which G. Birkhoff named the "von Neumann paradox on three shock theory" [18]. In 1972, K. O. Friedrichs called for more computational efforts on this problem. It remains difficult, for weak impinging shocks, to obtain contact discontinuities and the curved Mach stem with satisfactory accuracy; recent numerical computations sometimes even fail to show the reflected shock wave [6]. This explains why the von Neumann paradox of the three-shock theory in the case of weak discontinuities is still of interest [9,12,14]. In this paper, on one hand, we investigate numerical methods for the Euler equations of compressible inviscid flow, aiming to improve the computation of contact discontinuities; on the other hand, a methodology is suggested to correctly plot flow data from the massive information in storage. On this basis, the reflected shock wave, the contact discontinuity and the curved Mach stem are all determined. We obtain Mach reflection under conditions where the over-simplified shock theory predicts no such configuration [5].

10.
In this paper, we study a class of stochastic processes, called evolving network Markov chains, in evolving networks. Our approach is to transform the degree distribution problem of an evolving network to a corresponding problem of evolving network Markov chains. We investigate the evolving network Markov chains, thereby obtaining some exact formulas as well as a precise criterion for determining whether the steady degree distribution of the evolving network is a power-law or not. With this new method, we finally obtain a rigorous, exact and unified solution of the steady degree distribution of the evolving network.

11.
We consider the M/G/1 and GI/M/1 types of Markov chains whose one-step transitions depend on the times of the transitions. These types of Markov chains are encountered in several stochastic models, including queueing systems, dams, inventory systems, insurance risk models, etc. We show that for the cases when the time parameters are periodic, the systems can be analyzed using some extensions of known results in the matrix-analytic methods literature. We limit our examples to queueing systems to maintain focus. An example application of the model to a real-life problem is presented.

12.
This paper presents an asymptotic analysis of hierarchical production planning in a manufacturing system with serial machines that are subject to breakdown and repair, and with convex costs. The machine capacities are modeled as Markov chains. Since the number of parts in the internal buffers between any two machines needs to be non-negative, the problem is inherently a state-constrained problem. As the rate of change in the machine states approaches infinity, the analysis results in a limiting problem in which the stochastic machine capacity is replaced by the equilibrium mean capacity. A method of "lifting" and "modification" is introduced in order to construct near-optimal controls for the original problem by using near-optimal controls of the limiting problem. The value function of the original problem is shown to converge to the value function of the limiting problem, and the convergence rate is obtained based on some a priori estimates of the asymptotic behavior of the Markov chains. As a result, an error estimate can be obtained on the near optimality of the controls constructed for the original problem.

13.
Within the set of discrete-time Markov chains, a Markov chain is embeddable in case its transition matrix has at least one root that is a stochastic matrix. The present paper examines the embedding problem for discrete-time Markov chains with three states and with real eigenvalues. Sufficient embedding conditions are proved for diagonalizable transition matrices as well as for non-diagonalizable transition matrices and for all possible configurations regarding the sign of the eigenvalues. The embedding conditions are formulated in terms of the projections and the spectral decomposition of the transition matrix.
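For a diagonalizable case, a stochastic square root can be computed directly from the spectral decomposition: apply the square root to each eigenvalue and reassemble. The 3-state matrix below is a made-up example with positive real eigenvalues, not one from the paper.

```python
import numpy as np

# Hypothetical diagonalizable 3-state transition matrix with positive
# real eigenvalues (illustrative example only).
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Spectral decomposition P = V diag(lam) V^{-1}; the principal root
# is obtained by taking sqrt of each eigenvalue.
lam, V = np.linalg.eig(P)
R = ((V * np.sqrt(lam)) @ np.linalg.inv(V)).real

print(np.round(R, 4))
# Here R is itself stochastic (non-negative, rows sum to 1),
# exhibiting a root as required for embeddability.
```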

14.
Starting with finite Markov chains on partitions of a natural number n we construct, via a scaling limit transition as n → ∞, a family of infinite-dimensional diffusion processes. The limit processes are ergodic; their stationary distributions, the so-called z-measures, appeared earlier in the problem of harmonic analysis for the infinite symmetric group. The generators of the processes are explicitly described.

15.
This work develops asymptotic expansions for solutions of systems of backward equations of time-inhomogeneous Markov chains in continuous time. Owing to the rapid progress in technology and the increasing complexity in modeling, the underlying Markov chains often have large state spaces, which make the computational tasks infeasible. To reduce the complexity, two-time-scale formulations are used. By introducing a small parameter ε > 0 and using suitable decomposition and aggregation procedures, it is formulated as a singular perturbation problem. Both Markov chains having recurrent states only and Markov chains also including transient states are treated. Under certain weak irreducibility and smoothness conditions on the generators, the desired asymptotic expansions are constructed. Then error bounds are obtained.

16.
This paper is devoted to the functional analytic approach to the problem of existence of Markov processes with Dirichlet boundary condition, oblique derivative boundary condition and first-order Wentzell boundary condition for second-order, uniformly elliptic differential operators with discontinuous coefficients. More precisely, we construct Feller semigroups associated with absorption, reflection, drift and sticking phenomena at the boundary. The approach here is distinguished by the extensive use of the ideas and techniques characteristic of the recent developments in the Calderón-Zygmund theory of singular integral operators with non-smooth kernels.

17.
In many applications of absorbing Markov chains, solution of the problem at hand involves finding the mean time to absorption. Moreover, in almost all real world applications of Markov chains, accurate estimation of the elements of the probability matrix is a major concern. This paper develops a technique that provides close estimates of the mean number of stages before absorption with only the row sums of the transition matrix of transient states.
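For reference, the exact quantity the paper approximates is the classical mean absorption time computed from the fundamental matrix N = (I - Q)⁻¹, where Q is the transient-to-transient block. The chain below is a made-up 4-state example.

```python
import numpy as np

# Hypothetical 4-state absorbing chain; state 3 is absorbing,
# states 0-2 are transient.  Q is the transient-to-transient block.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.4, 0.3, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])
Q = P[:3, :3]

# Fundamental matrix N = (I - Q)^(-1); mean steps to absorption t = N 1.
N = np.linalg.inv(np.eye(3) - Q)
t = N @ np.ones(3)
print(t)   # mean absorption time from each transient state
```

The paper's contribution is to approximate t closely using only the row sums of Q, without knowing its individual entries.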

18.
Persi Diaconis and Phil Hanlon in their interesting paper (4) give the rates of convergence of some Metropolis Markov chains on the cube Z^d(2). Markov chains on finite groups that are actually random walks are easier to analyze because the machinery of harmonic analysis is available. Unfortunately, Metropolis Markov chains are, in general, not random walks on a group structure. In attempting to understand Diaconis and Hanlon's work, the authors were led to the idea of a hypergroup deformation of a finite group G, i.e., a continuous family of hypergroups whose underlying space is G and whose structure is naturally related to that of G. Such a deformation is provided for Z^d(2), and it is shown that the Metropolis Markov chains studied by Diaconis and Hanlon can be viewed as random walks on the deformation. A direct application of the Diaconis-Shahshahani Upper Bound Lemma, which applies to random walks on hypergroups, is used to obtain the rate of convergence of the Metropolis chains starting at any point. When the Markov chains start at 0, a result in Diaconis and Hanlon (4) is obtained with exactly the same rate of convergence. These results are extended to Z^d(3). Research supported in part by the Office of Research and Sponsored Programs, University of Oregon.

19.
A methodology is proposed that is suitable for efficient simulation of continuous-time Markov chains that are nearly-completely decomposable. For such Markov chains the effort to adequately explore the state space via Crude Monte Carlo (CMC) simulation can be extremely large. The purpose of this paper is to provide a fast alternative to the standard CMC algorithm, which we call Aggregate Monte Carlo (AMC). The idea of the AMC algorithm is to reduce the jumping back and forth of the Markov chain in small subregions of the state space. We accomplish this by aggregating such problem regions into single states. We discuss two methods to identify collections of states where the Markov chain may become 'trapped': the stochastic watershed segmentation from image analysis, and a graph-theoretic decomposition method. As a motivating application, we consider the problem of estimating the charge carrier mobility of disordered organic semiconductors, which contain low-energy regions in which the charge carrier can quickly become stuck. It is shown that the AMC estimator for the charge carrier mobility reduces computational costs by several orders of magnitude compared to the CMC estimator.

20.
Numerical solution of high-dimensional Dirichlet problems using Brownian motion with drift
This paper proposes a new and effective method for the numerical solution of high-dimensional Dirichlet problems. The method uses the stochastic representation of the solution, the distribution of the hitting time and hitting position on a sphere, and the strong Markov property of Brownian motion with drift.
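The stochastic representation referred to here is u(x0) = E[g(B_tau)], where tau is the first exit time from the domain. The sketch below is a crude Euler-Maruyama Monte Carlo illustration of that representation only; it does not use the paper's exact sphere-hitting distributions, and all names in it are invented for the example.

```python
import numpy as np

# Crude Monte Carlo sketch of u(x0) = E[g(B_tau)] for Brownian motion
# with drift b, stopped at the first exit from the unit disk.
def dirichlet_mc(x0, g, b, n_paths=1000, dt=2e-3, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        while x @ x < 1.0:                    # still inside the unit disk
            x += b * dt + np.sqrt(dt) * rng.standard_normal(2)
        total += g(x / np.linalg.norm(x))     # project onto the boundary
    return total / n_paths

# Sanity check with zero drift and harmonic boundary data g(x) = x[0]:
# by optional stopping, u(x0) = x0[0], so the estimate should be near 0.2.
est = dirichlet_mc((0.2, 0.0), g=lambda x: x[0], b=np.zeros(2))
print(est)
```

The paper's method avoids this small-step discretization by sampling the exact hitting time and hitting position on spheres and invoking the strong Markov property, which is far more efficient in high dimensions.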
