Similar Literature
20 similar records retrieved.
1.
We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a finite horizon model and a discounted infinite horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite horizon problem we develop a risk-averse policy iteration method and we prove its convergence. We also propose a version of the Newton method to solve a nonsmooth equation arising in the policy iteration method and we prove its global convergence. Finally, we discuss relations to min–max Markov decision models.
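When the risk measure is the plain expectation, the risk-averse value iteration above reduces to classical discounted value iteration. As a hedged illustration only (this is the standard risk-neutral scheme, not the paper's risk-averse operator; the toy MDP in the usage line is an assumption):

```python
import numpy as np

def value_iteration(P, c, gamma=0.9, tol=1e-10):
    """Plain (risk-neutral) value iteration for a discounted cost MDP.

    P: array of shape (A, S, S), one transition matrix per action.
    c: array of shape (A, S), one-step cost per action and state.
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    while True:
        # Bellman update: minimize the cost-to-go over actions.
        q = c + gamma * P @ v          # shape (A, S)
        v_new = q.min(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

# Toy example: one action, identity transitions, unit cost.
# Fixed point is v = c / (1 - gamma) = 2 in every state.
v = value_iteration(np.array([[[1.0, 0.0], [0.0, 1.0]]]),
                    np.ones((1, 2)), gamma=0.5)
```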

2.
We study infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled continuous time Markov chains on a countable state space. For the discounted-cost game, we prove the existence of value and saddle-point equilibrium in the class of Markov strategies under nominal conditions. For the ergodic-cost game, we prove the existence of values and saddle point equilibrium by studying the corresponding Hamilton-Jacobi-Isaacs equation under a certain Lyapunov condition.

3.
Suppose we observe a stationary Markov chain with unknown transition distribution. The empirical estimator for the expectation of a function of two successive observations is known to be efficient. For reversible Markov chains, an appropriate symmetrization is efficient. For functions of more than two arguments, these estimators cease to be efficient. We determine the influence function of efficient estimators of expectations of functions of several observations, both for completely unknown and for reversible Markov chains. We construct simple efficient estimators in both cases.
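The empirical estimator of an expectation of a function of two successive observations is just a path average. A minimal sketch on a hypothetical two-state chain (the chain, function, and path length are illustrative assumptions, not the paper's efficiency analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state chain with transition matrix P; its stationary
# distribution pi = (2/3, 1/3) solves pi P = pi.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])

def simulate(n, x0=0):
    """Simulate n steps of the chain started at x0."""
    xs = np.empty(n, dtype=int)
    xs[0] = x0
    for i in range(1, n):
        xs[i] = rng.choice(2, p=P[xs[i - 1]])
    return xs

def empirical_pair_mean(xs, f):
    """Empirical estimator for E_pi[f(X_i, X_{i+1})]."""
    return np.mean([f(a, b) for a, b in zip(xs[:-1], xs[1:])])

xs = simulate(200_000)
# Probability of a state change per step; true value is
# pi(0)*0.1 + pi(1)*0.2 = 2/15.
est = empirical_pair_mean(xs, lambda a, b: float(a != b))
```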

4.
We propose the construction of a quantum Markov chain that corresponds to a “forward” quantum Markov chain. In the given construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions.

5.
For additive functionals defined on a sequence of Markov chains that approximate a Markov process, we establish the convergence of functionals under the condition of local convergence of their characteristics (mathematical expectations).

6.
A maximum out forest of a digraph is its spanning subgraph that consists of disjoint diverging trees and has the maximum possible number of arcs. For an arbitrary weighted digraph, we consider a matrix of specific weights of maximum out forests and demonstrate how this matrix can be used to get a graph-theoretic interpretation for the limiting probabilities of Markov chains. For a special (nonclassical) correspondence between Markov chains and weighted digraphs, the matrix of Cesàro limiting transition probabilities of any finite homogeneous Markov chain coincides with the normalized matrix of maximum out forests of the corresponding digraphs. This provides a finite (combinatorial) method to calculate the limiting probabilities of Markov chains and thus their stationary distributions. On the other hand, the Markov chain technique provides the proofs to some statements about digraphs.
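The forest–chain correspondence can be checked numerically on a small example. The sketch below uses the classical Markov chain tree theorem (stationary probabilities proportional to the total weight of spanning trees directed toward each root), which is closely related to, but not identical with, the out-forest matrix of the abstract; the three-state chain and the brute-force enumeration are illustrative assumptions:

```python
import itertools
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])
n = P.shape[0]

def tree_weight_sum(root):
    """Sum over all spanning trees directed toward `root` of the
    product of transition probabilities along the tree's arcs."""
    total = 0.0
    others = [i for i in range(n) if i != root]
    for parents in itertools.product(range(n), repeat=len(others)):
        succ = dict(zip(others, parents))   # arc i -> succ[i]
        if any(succ[i] == i for i in others):
            continue
        ok = True
        for i in others:                    # every node must reach root
            seen, j = {i}, i
            while j != root:
                j = succ[j]
                if j in seen:
                    ok = False
                    break
                seen.add(j)
            if not ok:
                break
        if ok:
            w = 1.0
            for i in others:
                w *= P[i, succ[i]]
            total += w
    return total

weights = np.array([tree_weight_sum(r) for r in range(n)])
pi_trees = weights / weights.sum()

# Cross-check against the stationary distribution from pi P = pi.
evals, evecs = np.linalg.eig(P.T)
v = np.real(evecs[:, np.argmax(np.real(evals))])
pi_eig = v / v.sum()
```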

7.
Markov Chain Analysis of Premature Convergence in Genetic Algorithms
赵小艳, 聂赞坎. 《数学季刊》, 2003, 18(4): 364-368
Genetic algorithms are highly parallel, self-adaptive random search methods based on ideas from natural selection and natural genetics. They are also bionic optimization algorithms that draw on biological evolution, especially genetic terms and principles. The definition of convergence of GAs has many varieties, including convergence in distribution, in probability, with probability 1, and convergence almost everywhere. Even for the same GA model, different definitions have different limits: the limit might be the global optimal solution, a local optimal solution, or a non-optimal so…

8.
This paper suggests a generalized semi‐Markov model for manpower planning, which could be adopted in cases of unavailability of candidates with the desired qualifications/experience, as well as in cases where an organization provides training opportunities to its personnel. In this context, we incorporate training classes into the framework of a non‐homogeneous semi‐Markov system and we introduce an additional, external semi‐Markov system providing the former with potential recruits. For the model above, referred to as the Augmented Semi‐Markov System, we derive the equations that reflect the expected number of persons in each grade and we also investigate its limiting population structure. An illustrative example is provided. Copyright © 2010 John Wiley & Sons, Ltd.

9.
A theorem is proved which establishes the conditions for a Gaussian vector stationary process to be Markovian. For a stationary process with finite generalized Markov property we construct a vector Markov process whose first coordinate coincides with the given process. Applying our theorem to the vector process, we derive formulas for the linear predictor of a process with finite generalized Markov property. Translated from Statisticheskie Metody, pp. 82–90, 1980. I would like to acknowledge the helpful attention of P. N. Sapozhnikov.

10.
There exists a deep relationship between the nonexplosion conditions for Markov evolution in classical and quantum probability theories. Both of these conditions are equivalent to the preservation of the unit operator (total probability) by a minimal Markov semigroup. In this work, we study the Heisenberg evolution describing an interaction between the chain of N two-level atoms and an n-mode external Bose field, which was considered recently by J. Alli and J. Sewell. The unbounded generator of the Markov evolution of observables acts in the von Neumann algebra. For the generator of a Markov semigroup, we prove a nonexplosion condition, which is the operator analog of a similar condition suggested by R. Z. Khas’minski and later by T. Taniguchi for classical stochastic processes. For the operator algebra situation, this condition ensures the uniqueness and conservativity of the quantum dynamical semigroup describing the Markov evolution of a quantum system. In the regular case, the nonexplosion condition establishes a one-to-one relation between the formal generator and the infinitesimal operator of the Markov semigroup. Translated from Matematicheskie Zametki, Vol. 67, No. 5, pp. 788–796, May, 2000.

11.
12.
For a stochastic differential equation with non-Lipschitz coefficients, we construct, by Euler scheme, a measurable flow of the solution, and we prove the solution is a Markov process.
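A minimal sketch of the Euler (Euler–Maruyama) scheme the abstract refers to, for a generic one-dimensional SDE dX_t = b(X_t) dt + σ(X_t) dW_t; the drift, diffusion, and step count below are illustrative assumptions, not the paper's non-Lipschitz setting:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    """Euler scheme for dX_t = b(X_t) dt + sigma(X_t) dW_t on [0, T]
    with n uniform steps; returns the discrete path of length n + 1."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    # Brownian increments: independent N(0, dt) draws.
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    for k in range(n):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dW[k]
    return x

# Ornstein-Uhlenbeck-type example: dX = -X dt + dW, started at 2.
rng = np.random.default_rng(1)
path = euler_maruyama(lambda x: -x, lambda x: 1.0,
                      x0=2.0, T=1.0, n=1000, rng=rng)
```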

13.
This paper mainly discusses Markov chains in a random environment that is independent of the state, and rigorously proves the following: in the general case where the environment is non-degenerate, such a chain is not a time-homogeneous Markov chain, whereas when the environment is degenerate it is necessarily a Markov chain.

14.
Strongly excessive functions play an important role in the theory of Markov decision processes and Markov games. In this paper the following question is investigated: What are the properties of Markov decision processes which possess a strongly excessive function? A probabilistic characterization is presented in the form of a random drift through a partitioned state space. For strongly excessive functions which have a positive lower bound, a characterization is given in terms of the lifetime distribution of the process. Finally, we give a characterization in terms of the spectral radius.

15.
Velicu, Andrei. Potential Analysis, 2022, 56(1): 165-190
First, we consider a finite-dimensional Markov semigroup generated by the Dunkl Laplacian with drift terms. For this semigroup we prove gradient bounds involving a symmetrised...

16.
1. Introduction. The motivation for writing this paper was calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature, for example, [3–7]. For a finite Markov chain, …

17.
For a Markov transition kernel P and a probability distribution μ on nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_{\mu} = \sum_k \mu(k)P^k.$ In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare efficiency of Barker’s and Metropolis algorithms in terms of asymptotic variance.
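The time-sampled kernel $P_{\mu}$ is straightforward to form numerically. A minimal sketch with a hypothetical two-state kernel and a truncated geometric sampling distribution (both illustrative assumptions); note that $P_{\mu}$ keeps the stationary distribution of P:

```python
import numpy as np

def time_sampled_kernel(P, mu):
    """Form P_mu = sum_k mu(k) P^k, where mu[k] is the weight on P^k
    (k = 0, 1, ..., len(mu) - 1)."""
    Pmu = np.zeros_like(P)
    Pk = np.eye(P.shape[0])      # running power, starts at P^0
    for m in mu:
        Pmu += m * Pk
        Pk = Pk @ P
    return Pmu

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Truncated geometric(1/2) sampling distribution, renormalized.
mu = 0.5 ** np.arange(30)
mu /= mu.sum()
Pmu = time_sampled_kernel(P, mu)
# pi = (2/3, 1/3) is stationary for P, hence for every P^k and for P_mu.
pi = np.array([2 / 3, 1 / 3])
```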

18.
We determine the convergence speed of a numerical scheme for approximating one-dimensional continuous strong Markov processes. The scheme is based on the construction of certain Markov chains whose laws can be embedded into the process with a sequence of stopping times. Under a mild condition on the process's speed measure we prove that the approximating Markov chains converge at fixed times at the rate of 1/4 with respect to every p-th Wasserstein distance. For the convergence of paths, we prove any rate strictly smaller than 1/4. Our results apply, in particular, to processes with irregular behavior such as solutions of SDEs with irregular coefficients and processes with sticky points.

19.
For continuous-time Markov chains, we provide criteria for non-ergodicity, non-algebraic ergodicity, non-exponential ergodicity, and non-strong ergodicity. For discrete-time Markov chains, criteria for non-ergodicity, non-algebraic ergodicity, and non-strong ergodicity are given. Our criteria are in terms of the existence of solutions to inequalities involving the Q-matrix (or the transition matrix P in the discrete-time case) of the chain. Meanwhile, these practical criteria are applied to some examples, including a special class of single birth processes and several multi-dimensional models.

20.
A random chaotic interval map with noise which causes coarse-graining induces a finite-state Markov chain. For a map topologically conjugate to a piecewise-linear map with the Lebesgue measure being ergodic, we prove that the Shannon entropy for the induced Markov chain possesses a finite limit as the noise level tends to zero. In most cases, the limit turns out to be strictly greater than the Lyapunov exponent of the original map without noise.
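The induced finite-state chain and its Shannon entropy can be illustrated numerically. The sketch below coarse-grains a noisy tent map (a piecewise-linear map whose Lyapunov exponent is log 2) into bins, estimates the induced transition matrix from a long orbit, and computes the entropy rate of that chain; the map, noise level, and bin count are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def tent(x):
    """Tent map on [0, 1]; Lyapunov exponent log 2."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

# Simulate the map with small additive noise, clipped to [0, 1],
# and record which of n_bins equal subintervals the orbit visits.
n_steps, eps, n_bins = 200_000, 1e-3, 8
x = 0.1234
states = np.empty(n_steps, dtype=int)
for t in range(n_steps):
    x = tent(x) + eps * rng.uniform(-1, 1)
    x = min(max(x, 0.0), 1.0)
    states[t] = min(int(x * n_bins), n_bins - 1)

# Estimate the transition matrix of the induced finite-state chain.
C = np.zeros((n_bins, n_bins))
for a, b in zip(states[:-1], states[1:]):
    C[a, b] += 1
P = C / C.sum(axis=1, keepdims=True)
pi = C.sum(axis=1) / C.sum()

# Shannon entropy rate of the induced chain, in nats per step:
# h = -sum_i pi_i sum_j P_ij log P_ij.
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(P > 0, P * np.log(P), 0.0)
h = -np.sum(pi * terms.sum(axis=1))
```

For the tent map each bin's image spreads evenly over two bins, so h should come out near log 2, matching the Lyapunov exponent of the noiseless map.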


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号