Similar Documents
20 similar documents found.
1.
This paper introduces an error propagation formula for a certain class of multi-level iterative aggregation–disaggregation (IAD) methods for the numerical solution of stationary probability vectors of discrete finite Markov chains. The formula can be used to investigate convergence by computing the spectral radius of the error propagation matrix for specific Markov chains. Numerical experiments indicate that the same type of formula could be used for a wider class of multi-level IAD methods. Using the formula, we show that for given data there is no relation between the convergence of two-level and of multi-level IAD methods.
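The two-level IAD scheme referred to above can be sketched as follows. This is a minimal illustration, not the paper's multi-level method: the aggregate groups, the single power-method smoothing step, and the small test chain are all assumptions of this sketch.

```python
import numpy as np

def iad_step(P, pi, groups):
    """One two-level iterative aggregation-disaggregation (IAD) step for a
    row-stochastic matrix P, refining an approximation pi of the stationary
    vector (pi P = pi, pi >= 0, sum(pi) = 1)."""
    n, m = len(pi), len(groups)
    # Aggregation: coarse transition matrix weighted by the current iterate.
    B = np.zeros((m, m))
    for I, gI in enumerate(groups):
        w = pi[gI] / pi[gI].sum()               # conditional weights within aggregate I
        for J, gJ in enumerate(groups):
            B[I, J] = w @ P[np.ix_(gI, gJ)].sum(axis=1)
    # Solve the small coarse stationary problem z = z B exactly.
    vals, vecs = np.linalg.eig(B.T)
    z = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    z = z / z.sum()
    # Disaggregation: rescale the fine-level iterate inside each aggregate.
    x = np.zeros(n)
    for I, gI in enumerate(groups):
        x[gI] = z[I] * pi[gI] / pi[gI].sum()
    # One power-method smoothing step.
    x = x @ P
    return x / x.sum()
```

Repeating the step from a uniform starting vector drives the iterate toward the stationary distribution for well-behaved chains; computing the spectral radius of the error propagation matrix, as in the paper, predicts whether this happens for a given chain and aggregation.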

2.
Equivalence is shown between different conditions for convergence of iterative methods for consistent singular systems of linear equations on Banach spaces. These systems appear in many applications, such as Markov chains and Markov processes. The conditions considered relate the range and null spaces of different operators.

3.
We present a new class of interacting Markov chain Monte Carlo methods to approximate numerically discrete-time nonlinear measure-valued equations. These stochastic processes belong to the class of self-interacting Markov chains with respect to their occupation measures. We provide several convergence results for these new methods including exponential estimates and a uniform convergence theorem with respect to the time parameter, yielding what seems to be the first results of this kind for this type of self-interacting models. We illustrate these models in the context of Feynman–Kac distribution semigroups arising in physics, biology and in statistics.

4.
The use of block two-stage methods for the iterative solution of consistent singular linear systems is studied. In these methods, suitable for parallel computations, different blocks, i.e., smaller linear systems, can be solved concurrently by different processors. Each of these smaller systems is solved by an (inner) iterative method. Hypotheses are provided for the convergence of non-stationary methods, i.e., when the number of inner iterations may vary from block to block and from one outer iteration to another. It is shown that the iteration matrix corresponding to one step of the block method is convergent, i.e., that its powers converge to a limit matrix. A theorem on the convergence of the infinite product of matrices with the same eigenspace corresponding to the eigenvalue 1 is proved, and later used as a tool in the convergence analysis of the block method. The methods studied can be used to solve any consistent singular system, including discretizations of certain differential equations. They can also be used to find the stationary probability distribution of Markov chains. This last application is considered in detail.
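As a toy illustration of the block two-stage idea (an outer block Jacobi sweep whose diagonal-block solves are replaced by a few inner point-Jacobi sweeps), here is a minimal Python sketch for the stationary-distribution application. The damping step, the iteration counts, and the test chain are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def block_two_stage(P, groups, outer=300, inner=10):
    """Two-stage block iteration for the singular system (I - P^T) pi = 0,
    with P row-stochastic and irreducible.  Outer sweep: damped block
    Jacobi; inner loop: a few point-Jacobi sweeps approximate each
    diagonal-block solve, so the blocks could run on separate processors."""
    n = P.shape[0]
    A = np.eye(n) - P.T                    # singular: A @ pi = 0 for the stationary pi
    x = np.full(n, 1.0 / n)
    for _ in range(outer):
        y = np.empty_like(x)
        r = A @ x
        for g in groups:
            Ag = A[np.ix_(g, g)]
            c = Ag @ x[g] - r[g]           # right-hand side (N x)_g, N = blockdiag(A) - A
            d = np.diag(Ag)
            yg = x[g].copy()
            for _ in range(inner):         # inexact inner solve of Ag @ yg = c
                yg = (c - (Ag @ yg - d * yg)) / d
            y[g] = yg
        x = x + y                          # damping avoids 2-cyclic stagnation
        x = x / x.sum()
    return x
```

The stationary vector is an exact fixed point of this inexact iteration, because each inner solve starts from the current block iterate; the paper's analysis covers the general non-stationary case where the inner iteration counts vary.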

5.
Some posterior distributions lead to Markov chain Monte Carlo (MCMC) chains that are naturally viewed as collections of subchains. Examples include mixture models, regime-switching models, and hidden Markov models. We obtain MCMC-based estimators of posterior expectations by combining different subgroup (subchain) estimators using stratification and poststratification methods. Variance estimates of the limiting distributions of such estimators are developed. Based on these variance estimates, we propose a test statistic to aid in the assessment of convergence and mixing of chains. We compare our diagnostic with other commonly used methods. The approach is illustrated in two examples: a latent variable model for arsenic concentration in public water systems in Arizona and a Bayesian hierarchical model for Pacific sea surface temperatures. Supplementary materials, which include MATLAB codes for the proposed method, are available online.
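The combination step can be illustrated with a minimal poststratification sketch: per-subchain sample means weighted by stratum probabilities, estimated by occupancy frequencies when unknown. The function name and interface are invented for illustration; the paper's variance estimates and convergence diagnostic are not reproduced here.

```python
import numpy as np

def poststratified_mean(values, labels, weights=None):
    """Combine per-stratum (subchain) sample means into one estimate of
    E[f(X)].  If stratum weights are not supplied, they are estimated by
    the occupancy frequencies, which recovers the ordinary overall mean."""
    strata = np.unique(labels)
    means = np.array([values[labels == s].mean() for s in strata])
    if weights is None:
        weights = np.array([(labels == s).mean() for s in strata])
    return float(weights @ means)
```

With known stratum probabilities the weighted estimator can have smaller variance than the plain average, which is the motivation for treating the subchains separately.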

6.
Decision-making in an environment of uncertainty and imprecision for real-world problems is a complex task. This paper introduces general finite-state fuzzy Markov chains that converge in finitely many steps to a stationary (possibly periodic) solution. The Cesàro average and the -potential for fuzzy Markov chains are defined; it is then shown that the relationship between them corresponds to the Blackwell formula in the classical theory of Markov decision processes. Furthermore, it is pointed out that recurrence does not necessarily imply ergodicity. However, if a fuzzy Markov chain is ergodic, then the rows of its ergodic projection equal the greatest eigen fuzzy set of the transition matrix. The fuzzy Markov chain is then shown to be a robust system with respect to small perturbations of the transition matrix, which is not the case for classical probabilistic Markov chains. Fuzzy Markov decision processes are finally introduced and discussed.
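The finite-convergence phenomenon can be sketched in a few lines, assuming the common max-min composition for fuzzy transition matrices; the example matrix in the usage below is invented for illustration.

```python
import numpy as np

def maxmin_compose(R, Q):
    """Max-min composition: (R o Q)[i, j] = max_k min(R[i, k], Q[k, j])."""
    return np.array([[np.max(np.minimum(R[i, :], Q[:, j]))
                      for j in range(Q.shape[1])]
                     for i in range(R.shape[0])])

def fuzzy_powers(Q, kmax=64):
    """Iterate the powers Q, Q^2, Q^3, ... of a fuzzy transition matrix
    under max-min composition.  Because entries are drawn from the finite
    set of values already present in Q, the powers stabilize (or cycle)
    after finitely many steps; return the first stabilized power and the
    number of compositions performed, or (last power, None) if it cycles."""
    R = Q
    for k in range(kmax):
        Rn = maxmin_compose(R, Q)
        if np.array_equal(Rn, R):
            return Rn, k + 1
        R = Rn
    return R, None
```

For a periodic fuzzy chain the loop would not find a fixed power, which is why the sketch also reports failure; the paper treats the periodic case via the Cesàro average.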

7.
As a main step in the numerical solution of control problems in continuous time, the controlled process is approximated by sequences of controlled Markov chains, thus discretising time and space. A new feature in this context is to allow for delay in the dynamics. The existence of an optimal strategy with respect to the cost functional can be guaranteed in the class of relaxed controls. Weak convergence of the approximating extended Markov chains to the original process together with convergence of the associated optimal strategies is established.

8.
Summary. By performing an accurate analysis of the convergence, we give a complete theoretical explanation of the experimental behaviour of functional iteration techniques for the computation of the minimal nonnegative solution of the matrix equation \(X=\sum_{i=0}^{\infty}A_iX^i\), arising in the numerical solution of M/G/1 type Markov chains (here the \(A_i\)'s are nonnegative matrices such that \(\sum_{i}A_i\) is column stochastic). Moreover, we introduce a general class of functional iteration methods, which includes the standard methods, and we give an optimality convergence result in this class. Received September 1, 1995 / Revised version received September 9, 1996
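The standard ("natural") functional iteration for an equation of this form, starting from the zero matrix so that the iterates increase monotonically to the minimal nonnegative solution, can be sketched as follows. This is the plain baseline scheme, not the optimal method of the paper, and the stopping rule is an assumption of the sketch.

```python
import numpy as np

def natural_iteration(A, tol=1e-12, maxit=10000):
    """Fixed-point iteration X <- A_0 + A_1 X + A_2 X^2 + ... for the
    minimal nonnegative solution of X = sum_i A_i X^i, where A is the
    list [A_0, ..., A_d] of nonnegative coefficient matrices.  Starting
    from X = 0 gives monotone convergence to the minimal solution."""
    n = A[0].shape[0]
    X = np.zeros((n, n))
    for _ in range(maxit):
        Xp = np.zeros_like(X)
        power = np.eye(n)                  # accumulates X^i
        for Ai in A:
            Xp = Xp + Ai @ power
            power = power @ X
        if np.abs(Xp - X).max() < tol:
            return Xp
        X = Xp
    return X
```

In the scalar case the behaviour is easy to check by hand: for coefficients 0.2, 0.2, 0.6 the equation x = 0.2 + 0.2x + 0.6x^2 has roots 1/3 and 1, and the iteration picks out the minimal root 1/3.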

9.
Information-theoretic methods are used to prove convergence in information divergence of reversible Markov chains. Some ergodic theorems for information divergence are also proved.

10.
Let \((\xi _n)_{n=0}^\infty \) be a nonhomogeneous Markov chain taking values in a finite state-space \(\mathbf {X}=\{1,2,\ldots ,b\}\). In this paper, we will study the generalized entropy ergodic theorem with almost-everywhere and \(\mathcal {L}_1\) convergence for nonhomogeneous Markov chains; this generalizes the corresponding classical results for Markov chains.
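The flavour of such entropy ergodic results can be checked numerically in the classical homogeneous special case, where \(-\frac{1}{n}\log p(X_1,\ldots ,X_n\mid X_0)\) converges to the entropy rate. This Monte-Carlo sketch assumes an ergodic chain with strictly positive transition probabilities; the sample size and seed are arbitrary choices.

```python
import numpy as np

def empirical_entropy_rate(P, n, seed=0):
    """Simulate a path of a homogeneous ergodic chain and return
    -(1/n) log p(X_1..X_n | X_0), which by the (classical) entropy
    ergodic theorem converges to the entropy rate as n grows."""
    rng = np.random.default_rng(seed)
    states = np.arange(P.shape[0])
    x, logp = 0, 0.0
    for _ in range(n):
        y = rng.choice(states, p=P[x])
        logp += np.log(P[x, y])
        x = y
    return -logp / n

def entropy_rate(P):
    """Exact entropy rate sum_i pi_i H(P[i, :]) of an ergodic chain."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    pi = pi / pi.sum()
    H = -np.sum(P * np.log(P), axis=1)
    return float(pi @ H)
```

The paper's contribution is the extension of exactly this kind of limit theorem to nonhomogeneous chains, which the homogeneous simulation above does not cover.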

11.
1 Introduction One of the fundamental issues in the theory of evolutionary algorithms is their convergence. At present, the theoretical basis of evolutionary computation is still very weak [1–3], especially with respect to the convergence rates of evolutionary algorithms. According to the literature, there are few results on convergence rates [4–9], but this research area is very important in both theory and practice. Xu and Li [10] pointed out that the study of the…

12.
The data assimilation method, as commonly used in numerical ocean and atmospheric circulation models, produces an estimate of the state variables in terms of stochastic processes. This estimate is based on limiting properties of a diffusion-type process, which follow from the convergence of a sequence of Markov chains with jumps. The conditions for this convergence are investigated. The optimisation problem and the optimal filtering problem associated with the search for the best possible approximation of the true state variable are posed and solved. The results of a simple numerical experiment are discussed. It is shown that the proposed data assimilation method works properly and can be used in practical applications, particularly in meteorology and oceanography.

13.
We study the long run behaviour of interactive Markov chains on infinite product spaces. In view of microstructure models of financial markets, the interaction has both a local and a global component. The convergence of such Markov chains is analyzed on the microscopic level and on the macroscopic level of empirical fields. We give sufficient conditions for convergence on the macroscopic level. Using a perturbation of the Dobrushin–Vasserstein contraction technique we show that macroscopic convergence implies weak convergence of the underlying Markov chain. This extends the basic convergence theorem of Vasserstein for locally interacting Markov chains to the case where an additional global component appears in the interaction.

14.
In this paper, we develop an algorithmic method for the evaluation of the steady state probability vector of a special class of finite state Markov chains. For the class of Markov chains considered here, it is assumed that the matrix associated with the set of linear equations for the steady state probabilities possesses a special structure, such that it can be rearranged and decomposed as a sum of two matrices, one lower triangular with nonzero diagonal elements, and the other upper triangular with only very few nonzero columns. Almost all Markov chain models of queueing systems with finite source and/or finite capacity and first-come-first-served or head-of-the-line nonpreemptive priority service discipline belong to this special class.
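For reference, the generic dense computation that such specialized algorithms improve upon replaces one redundant balance equation by the normalization condition. This is a minimal baseline sketch, not the paper's triangular decomposition; the birth-death test chain below is invented.

```python
import numpy as np

def steady_state(P):
    """Direct solve of pi P = pi, sum(pi) = 1 for a finite irreducible
    chain with row-stochastic P: the balance equations (P^T - I) pi = 0
    are rank-deficient, so one of them is replaced by normalization."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0               # overwrite one redundant equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

A dense solve costs O(n^3); exploiting the lower-triangular-plus-sparse-columns structure described in the abstract is what makes the paper's method cheaper for its class of queueing models.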

15.
This paper gives a strong approximation for a class of nonhomogeneous Markov chains. As applications, strong consistency, the law of the iterated logarithm, and weak convergence are established for Markov chain adaptive designs in clinical trials.

16.
A class of numerical methods for nonlinear boundary-value problems for retarded differential equations with a parameter is considered. Sufficient conditions for convergence and error estimates are given.

17.
Nonnegative tensors arise very naturally in many applications that involve large and complex data flows. Due to the relatively small requirements in terms of memory storage and number of operations per step, the (shifted) higher order power method is one of the most commonly used techniques for the computation of positive Z‐eigenvectors of this type of tensor. However, unlike the matrix case, the method may fail to converge even for irreducible tensors. Moreover, when it converges, its convergence rate can be very slow. These two drawbacks often make the computation of the eigenvectors demanding or infeasible for large problems. In this work, we consider a particular class of nonnegative tensors associated with the multilinear PageRank modification of higher order Markov chains. Based on the simplified topological ε‐algorithm in its restarted form, we introduce an extrapolation‐based acceleration of power method type algorithms, namely, the shifted fixed‐point method and the inner‐outer method. The accelerated methods show remarkably better performance, with faster convergence rates and reduced overall computational time. Extensive numerical experiments on synthetic and real‐world datasets demonstrate the advantages of the introduced extrapolation techniques.
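The underlying multilinear PageRank fixed-point iteration (before any extrapolation acceleration) can be sketched as follows for a third-order tensor. The flattening R, the choice alpha = 0.4 (within the known sufficient range alpha < 1/2 for convergence of this iteration), and the stopping rule are assumptions of this sketch.

```python
import numpy as np

def multilinear_pagerank(R, v, alpha=0.4, tol=1e-12, maxit=10000):
    """Higher-order power-method iteration
        x <- alpha * R (x kron x) + (1 - alpha) * v
    for the multilinear PageRank vector of a third-order tensor given by
    its n-by-n^2 column-stochastic flattening R, with stochastic
    personalization vector v.  For alpha < 1/2 the map is a contraction
    on the simplex, so the iteration converges."""
    x = v.copy()
    for _ in range(maxit):
        xn = alpha * (R @ np.kron(x, x)) + (1 - alpha) * v
        xn = xn / xn.sum()                 # a no-op on the simplex, kept for safety
        if np.abs(xn - x).sum() < tol:
            return xn
        x = xn
    return x
```

For larger alpha the plain iteration can stagnate or converge very slowly, which is the regime where the extrapolation techniques of the paper pay off.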

18.
We start with a discussion of coupled algebraic Riccati equations arising in the study of linear-quadratic optimal control problems for Markov jump linear systems. Under suitable assumptions, this system of equations has a unique positive semidefinite solution, which is the solution of practical interest. The coupled equations can be rewritten as a single linearly perturbed matrix Riccati equation with special structures. We study the linearly perturbed Riccati equation in a more general setting and obtain a class of iterative methods from different splittings of a positive operator involved in the Riccati equation. We prove some special properties of the sequences generated by these methods and determine and compare the convergence rates of these methods. Our results are then applied to the coupled Riccati equations of jump linear systems. We obtain linear convergence of the Lyapunov iteration and the modified Lyapunov iteration, and confirm that the modified Lyapunov iteration indeed has faster convergence than the original Lyapunov iteration.

19.
A general theorem concerning the almost sure convergence of some nonhomogeneous Markov chains, whose conditional distributions satisfy a certain convergence condition, is given. This result applied to branching processes with infinite mean yields almost sure convergence for a large class of processes converging in distribution, as well as a characterization of the limiting distribution function.

20.
By proving a local limit theorem for higher-order transitions, we determine the time required for necklace chains to be close to stationarity. Because necklace chains, built by arranging identical smaller Markov chains around a directed cycle, are not reversible, have little symmetry, do not have uniform stationary distributions, and can be nearly periodic, prior general bounds on rates of convergence of Markov chains either do not apply or give poor bounds. Necklace chains can serve as test cases for future techniques for bounding rates of convergence.
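For chains small enough to power explicitly, the time to stationarity that such results bound analytically can be computed by brute force. This is a generic total-variation mixing-time sketch, not specific to necklace chains; the threshold eps = 0.25 is a conventional choice.

```python
import numpy as np

def tv_mixing_time(P, eps=0.25, tmax=10000):
    """Smallest t with max_x TV(P^t(x, .), pi) <= eps, computed by direct
    matrix powering.  Feasible only for small chains; analytical results
    such as local limit theorems replace this brute force in general."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    pi = pi / pi.sum()
    Pt = np.eye(P.shape[0])
    for t in range(1, tmax + 1):
        Pt = Pt @ P
        # TV distance of each row of P^t from pi; take the worst row.
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= eps:
            return t
    return None
```

For a two-state chain with transition probabilities 0.1 each way, the worst-row TV distance at time t is 0.5 * 0.8^t, so the 0.25-mixing time is 4 steps, which the brute-force computation reproduces.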


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号