Similar Literature
 19 similar documents found (search time: 140 ms)
1.
This paper considers the average reward criterion for nonstationary Markov decision processes (MDPs) with a countable state space. First, we point out and correct errors in Park et al. [1] and Alden et al. [2], and, under conditions weaker than those of Park et al. [1], prove the convergence of the optimal average values and the existence of an average optimal Markov policy by means of newly established optimality equations. Second, a rolling-horizon algorithm for computing ε(>0)-average optimal Markov policies is given.
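
As a rough illustration of the rolling-horizon idea, the sketch below solves a finite-horizon problem by backward induction and applies only its first-stage decision. It assumes a finite, stationary toy MDP with hypothetical transition matrices and rewards, not the nonstationary countable-state model treated in the paper.

```python
import numpy as np

def rolling_horizon_action(P, r, state, horizon):
    """Return the first-stage decision of a `horizon`-step lookahead.

    P[a] : (n_states, n_states) transition matrix of action a (hypothetical data)
    r[a] : one-step reward vector of action a
    """
    n_actions = len(P)
    n_states = r[0].shape[0]
    v = np.zeros(n_states)                      # terminal values set to zero
    best_first = np.zeros(n_states, dtype=int)
    for _ in range(horizon):                    # backward induction
        q = np.array([r[a] + P[a] @ v for a in range(n_actions)])
        best_first = q.argmax(axis=0)           # decisions at the current stage
        v = q.max(axis=0)
    return int(best_first[state])               # only this action is applied

# Toy two-state, two-action instance (assumed for illustration only).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.6, 0.4]])]
r = [np.array([1.0, 0.0]), np.array([0.5, 2.0])]
print(rolling_horizon_action(P, r, state=0, horizon=20))
```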

2.
The authors consider the average sample-path criterion for nonstationary MDPs with arbitrary state and action spaces. Under a weak ergodicity condition, the existence of an optimal Markov policy is proved using martingale limit theory, extending the main results of A. Arapostathis, V. Borkar, E. F. Gaucherand, M. Ghosh, and S. Marcus (1993).

3.
Guo Xianping, Acta Mathematica Sinica, 2000, 43(2): 269-274
This paper considers the average reward model of nonstationary MDPs with a countable state space and arbitrary action spaces. Drawing on the ideas of Feinberg, E. A. (1994), the concept of (G,B)-generated policies is introduced, which is broader than both Markov policies and Feinberg's (f,B)-generated policies. Under a weak ergodicity condition, the existence of uniformly optimal (G,B)-generated policies is proved by probabilistic analysis, thereby extending the main results of Feinberg, E. A. (1994) to the nonstationary countable-state-space case.

4.
This paper considers the average reward model of nonstationary MDPs proposed by Hinderer, in which both the state space and the action space are general sets. By enlarging the state space, the optimality equations for this model are established, and conditions under which the optimality equations have solutions and ε-optimal policies exist are given. Starting from the optimality equations, the existence of optimal policies is proved by a probabilistic method. Finally, a value iteration algorithm for this model is provided together with a proof of its convergence, thereby extending the main results of Smith L., Lasserre B. [3], and Lerma [6], among others.
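
The value iteration idea can be pictured on a finite, stationary special case via relative value iteration; the data, the unichain-type assumption, and the stopping rule below are all illustrative assumptions rather than the paper's construction.

```python
import numpy as np

def relative_value_iteration(P, r, tol=1e-8, max_iter=100_000):
    """Relative value iteration for a finite average-reward MDP (toy,
    stationary special case); assumes a unichain-type, aperiodic model
    so that the iterates converge."""
    n_states = r[0].shape[0]
    h = np.zeros(n_states)
    g, policy = 0.0, np.zeros(n_states, dtype=int)
    for _ in range(max_iter):
        q = np.array([r[a] + P[a] @ h for a in range(len(P))])
        Th = q.max(axis=0)
        policy = q.argmax(axis=0)
        g = Th[0] - h[0]            # gain estimate from a reference state
        h_new = Th - Th[0]          # renormalise so the iterates stay bounded
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return g, h, policy

# Hypothetical two-state, two-action instance (illustration only).
P = [np.array([[0.7, 0.3], [0.4, 0.6]]),
     np.array([[0.1, 0.9], [0.8, 0.2]])]
r = [np.array([2.0, 0.0]), np.array([1.0, 1.5])]
print(relative_value_iteration(P, r))
```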

5.
This paper considers the mean-variance optimization problem for the discounted model of continuous-time Markov decision processes. The state space and the action space are assumed to be Polish spaces, and the transition rates and reward rates may be unbounded. The optimization goal is to select, from the class of discount optimal stationary policies, a policy with minimal variance. We seek conditions for the existence of mean-variance optimal policies for Markov decision processes on Polish spaces. Using the first-passage decomposition method, we prove that the mean-variance optimization problem can be transformed into an "equivalent" expected discounted optimization problem, and thereby obtain an "optimality equation" for the mean-variance problem, the existence of a mean-variance optimal policy, and its characterization. Finally, several examples are given to illustrate the non-uniqueness of discount optimal policies and the existence of mean-variance optimal policies.

6.
For a general MDP model, this paper proves that, for any convex combination of the policy measures induced by an arbitrary family of history-dependent randomized policies, there exists a policy measure induced by a randomized Markov policy such that the corresponding values of the average expected criterion, the discounted criterion, and the expected total reward criterion are respectively equal, extending the corresponding results of E. B. Dynkin and Yushkevich [1], M. Puterman [2], E. Feinberg and A. Shwartz [3], R. Strauch [4], and Dong Zeqing and Song Jingsheng [5]. It is further proved that, for each of the average expected criterion, the discounted criterion, and the expected total reward criterion, the optimal policies are either unique or infinitely many.

7.
This paper studies the constrained discounted semi-Markov decision programming problem, that is, the constrained optimization problem of maximizing the discounted expected reward subject to a constraint on a discounted expected cost. Assuming a countable state set and compact nonempty Borel action sets, necessary and sufficient conditions for p-constrained optimal policies are given, and it is proved that a p-constrained optimal policy exists under suitable assumptions.

8.
This paper studies nonstationary Markov decision processes with the average cost criterion on a general state space. The result that, in the stationary case, the average cost optimality equation can be established by means of the optimality equations of auxiliary discounted models is extended to the nonstationary case. Using this result, the existence of optimal policies is proved.

9.
This paper discusses the discounted model of Markov decision programming (MDP) with a countable state space, a countable action space, a substochastic family of transition probabilities, and bounded reward functions, and gives a test criterion for non-ε-optimal policies.

10.
This paper defines a discrete-time discounted multi-objective Markov decision model. Under the weighted criterion, the existence of an (n,∞)-optimal Markov policy is proved; under the lexicographic criterion, structural properties of optimal policies are used to reduce the optimization problem to a sequence of single-objective optimization problems.
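
The lexicographic reduction can be sketched for a finite discounted toy model (hypothetical data, not the paper's model): optimise the first objective, restrict each state to the actions attaining that optimum, then optimise the second objective on the restricted model.

```python
import numpy as np

def discounted_vi(P, r, beta, allowed, tol=1e-10):
    """Value iteration restricted to the actions in `allowed[s]` for each state s."""
    n = r[0].shape[0]
    v = np.zeros(n)
    while True:
        q = np.array([r[a] + beta * P[a] @ v for a in range(len(P))])
        v_new = np.array([q[list(allowed[s]), s].max() for s in range(n)])
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q
        v = v_new

# Hypothetical two-state, two-action model with two reward vectors (r1, r2).
P = [np.array([[0.8, 0.2], [0.5, 0.5]]),
     np.array([[0.3, 0.7], [0.9, 0.1]])]
r1 = [np.array([1.0, 0.0]), np.array([1.0, 0.5])]
r2 = [np.array([0.0, 2.0]), np.array([1.0, 0.0])]
beta, n = 0.9, 2

# Step 1: optimise the primary criterion over all actions.
all_actions = [set(range(2)) for _ in range(n)]
v1, q1 = discounted_vi(P, r1, beta, all_actions)

# Step 2: keep only the actions that (nearly) attain the primary optimum.
conserving = [{a for a in range(2) if np.isclose(q1[a, s], v1[s])} for s in range(n)]

# Step 3: optimise the secondary criterion on the restricted model.
v2, _ = discounted_vi(P, r2, beta, conserving)
print(v1, v2)
```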

11.
In this paper, we consider nonstationary Markov decision processes (MDPs, for short) with the average variance criterion on a countable state space, with finite action spaces and bounded one-step rewards. From the optimality equations provided in this paper, we translate the average variance criterion into a new average expected cost criterion. We then prove that there exists a Markov policy, optimal for the original average expected reward criterion, that minimizes the average variance within the class of optimal policies for that criterion.
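
A standard way to write down such a translation (stated here as an assumed formulation, not a quotation of the paper's optimality equations) is: for policies that are optimal for the average reward criterion with optimal gain ρ*, the limiting average variance becomes an ordinary average expected cost.

```latex
% Assumed formulation of the limiting average variance of an average-reward
% optimal policy \pi, with optimal gain \rho^{*}:
\[
  V(\pi,x) \;=\; \limsup_{n\to\infty} \frac{1}{n}\,
    \mathbb{E}^{\pi}_{x}\!\Bigl[\sum_{t=0}^{n-1}\bigl(r(x_t,a_t)-\rho^{*}\bigr)^{2}\Bigr],
\]
% so, within the class of average-reward optimal policies, minimizing the
% variance is an average expected cost problem with one-step cost
% c(x,a) = (r(x,a) - \rho^{*})^{2}.
```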

12.
1. Introduction and Model. The earlier literature about constrained Markov decision processes (MDPs, for short) can be found in Derman's book [1]. Later, there have been many achievements in this area. For example, average reward MDPs with a constraint have been discussed by Beutler and Ross [2], Hordijk and Kallenberg [3], Altman and Schwartz [4], et al. In the case of finite state space, discounted reward criterion MDPs with a constraint have been treated by Kallenberg [5] and Tanaka [6], et al. When the state space is denumerable, such problems were discussed by Sennott [7] and Alt…

13.
This paper deals with denumerable-state continuous-time controlled Markov chains with possibly unbounded transition and reward rates. It concerns optimality criteria that improve the usual expected average reward criterion. First, we show the existence of average reward optimal policies with minimal average variance. Then we compare the variance minimization criterion with overtaking optimality. We present an example showing that they are opposite criteria, and therefore we cannot optimize them simultaneously. This leads to a multiobjective problem for which we identify the set of Pareto optimal policies (also known as nondominated policies).

14.
This paper deals with a new optimality criterion consisting of the usual three average criteria and the canonical triplet (together called the strong average-canonical optimality criterion) and introduces the concept of a strong average-canonical policy for nonstationary Markov decision processes, which extends the canonical policies of Hernández-Lerma and Lasserre [16] (p. 77) for stationary controlled Markov processes. For the case of possibly non-uniformly bounded rewards and a denumerable state space, we first construct, under some conditions, a solution to the optimality equations (OEs), and then prove that the Markov policies obtained from the OEs are not only optimal for the three average criteria but also optimal for all finite horizon criteria with a sequence of additional functions as their terminal rewards (i.e. strong average-canonical optimal). Some properties of optimal policies and the convergence of optimal average values are also discussed. Moreover, an error bound in average reward between a rolling horizon policy and a strong average-canonical optimal policy is provided, and a rolling horizon algorithm for computing strong average ε(>0)-optimal Markov policies is given.

15.
This article deals with the limiting average variance criterion for discrete-time Markov decision processes in Borel spaces. The costs may have neither upper nor lower bounds. We propose a new set of conditions under which we prove the existence of a variance minimal policy in the class of average expected cost optimal stationary policies. Our conditions are weaker than those in the previous literature. Moreover, some sufficient conditions for the existence of a variance minimal policy are imposed on the primitive data of the model. In particular, the stochastic monotonicity condition in this paper is used for the first time to study the limiting average variance criterion. Also, the optimality inequality approach provided here is different from the "optimality equation approach" widely used in the previous literature. Finally, we use a controlled queueing system to illustrate our results.

16.
In this paper, we consider a mean–variance optimization problem for Markov decision processes (MDPs) over the set of (deterministic stationary) policies. Different from the usual formulation in MDPs, we aim to obtain the mean–variance optimal policy that minimizes the variance over the set of all policies with a given expected reward. For continuous-time MDPs with the discounted criterion and finite state and action spaces, we prove that the mean–variance optimization problem can be transformed into an equivalent discounted optimization problem using the conditional expectation and Markov properties. Then, we show that a mean–variance optimal policy and the efficient frontier can be obtained by policy iteration methods within a finite number of iterations. We also address related issues such as a mutual fund theorem and illustrate our results with an example.
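
For intuition, a discrete-time, finite stand-in for this problem can be solved by brute force: compute the mean and variance of the discounted total reward for every deterministic stationary policy and, among the policies attaining the maximal mean, keep the one with the smallest variance. Enumeration here replaces the paper's policy iteration method, and all data are hypothetical.

```python
import itertools
import numpy as np

def discounted_mean_and_variance(P, r, policy, beta):
    """Mean and variance of the total discounted reward under a
    deterministic stationary policy (discrete-time toy model)."""
    n = len(policy)
    Pf = np.array([P[policy[s]][s] for s in range(n)])    # induced transition matrix
    rf = np.array([r[policy[s]][s] for s in range(n)])    # induced one-step rewards
    I = np.eye(n)
    m = np.linalg.solve(I - beta * Pf, rf)                 # first moment
    u = np.linalg.solve(I - beta**2 * Pf,
                        rf**2 + 2 * beta * rf * (Pf @ m))  # second moment
    return m, u - m**2

# Hypothetical two-state, two-action data and discount factor (illustration only).
P = [np.array([[0.9, 0.1], [0.3, 0.7]]),
     np.array([[0.4, 0.6], [0.6, 0.4]])]
r = [np.array([1.0, 0.2]), np.array([0.8, 1.0])]
beta, start = 0.9, 0

results = []
for policy in itertools.product(range(2), repeat=2):      # enumerate stationary policies
    mean, var = discounted_mean_and_variance(P, r, policy, beta)
    results.append((mean[start], var[start], policy))
best_mean = max(m for m, _, _ in results)
candidates = [x for x in results if np.isclose(x[0], best_mean)]
print(min(candidates, key=lambda x: x[1]))                 # min variance among mean-optimal
```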

17.
This paper studies both the average sample-path reward (ASPR) criterion and the limiting average variance criterion for denumerable discrete-time Markov decision processes. The rewards may have neither upper nor lower bounds. We give sufficient conditions on the system's primitive data under which we prove the existence of ASPR-optimal stationary policies and variance optimal policies. Our conditions are weaker than those in the previous literature. Moreover, our results are illustrated by a controlled queueing system. Research partially supported by the Natural Science Foundation of Guangdong Province (Grant No. 06025063) and the Natural Science Foundation of China (Grant No. 10626021).

18.
This paper deals with the average expected reward criterion for continuous-time Markov decision processes in general state and action spaces. The transition rates of the underlying continuous-time jump Markov processes are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. We give conditions on the system's primitive data under which we prove the existence of the average reward optimality equation and an average optimal stationary policy. Also, under our conditions we ensure the existence of ε-average optimal stationary policies. Moreover, we study some properties of average optimal stationary policies. We not only establish another average optimality equation on an average optimal stationary policy, but also present an interesting "martingale characterization" of such a policy. The approach provided in this paper is based on the policy iteration algorithm. It should be noted that our approach is rather different from both the usual "vanishing discount factor approach" and the "optimality inequality approach" widely used in the previous literature.
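
As a loose finite-state, discrete-time illustration of the policy iteration approach for the average criterion (the paper itself treats continuous-time processes with unbounded rates on general spaces), the sketch below alternates unichain policy evaluation with greedy improvement; all data are hypothetical.

```python
import numpy as np

def evaluate(P, r, policy):
    """Unichain policy evaluation: solve g + h(s) = r_f(s) + (P_f h)(s) with h(0) = 0."""
    n = len(policy)
    Pf = np.array([P[policy[s]][s] for s in range(n)])
    rf = np.array([r[policy[s]][s] for s in range(n)])
    A = np.zeros((n + 1, n + 1))        # unknowns: h(0..n-1) and the gain g
    b = np.zeros(n + 1)
    A[:n, :n] = np.eye(n) - Pf
    A[:n, n] = 1.0                      # the gain appears in every state equation
    b[:n] = rf
    A[n, 0] = 1.0                       # normalisation h(0) = 0
    sol = np.linalg.solve(A, b)
    return sol[n], sol[:n]              # gain, bias-like vector

def policy_iteration(P, r):
    n_actions, n_states = len(P), r[0].shape[0]
    policy = [0] * n_states
    while True:
        g, h = evaluate(P, r, policy)
        q = np.array([r[a] + P[a] @ h for a in range(n_actions)])
        improved = []
        for s in range(n_states):
            best = int(q[:, s].argmax())
            # keep the current action on (near-)ties to avoid cycling
            if np.isclose(q[policy[s], s], q[best, s]):
                best = policy[s]
            improved.append(best)
        if improved == policy:
            return policy, g
        policy = improved

# Hypothetical two-state, two-action instance (illustration only).
P = [np.array([[0.6, 0.4], [0.3, 0.7]]),
     np.array([[0.2, 0.8], [0.7, 0.3]])]
r = [np.array([1.0, 0.3]), np.array([0.6, 1.2])]
print(policy_iteration(P, r))
```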

19.
In this article, we study continuous-time Markov decision processes in Polish spaces. The optimality criterion to be maximized is the expected discounted criterion. The transition rates may be unbounded, and the reward rates may have neither upper nor lower bounds. We provide conditions on the controlled system's primitive data under which we prove that the transition functions of possibly non-homogeneous continuous-time Markov processes are regular by using Feller's construction approach to such transition functions. Then, under continuity and compactness conditions we prove the existence of optimal stationary policies by using the technique of extended infinitesimal operators associated with the transition functions of possibly non-homogeneous continuous-time Markov processes, and also provide a recursive way to compute (or at least to approximate) the optimal reward values. The conditions provided in this paper are different from those used in the previous literature, and they are illustrated with an example.
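
One concrete recursion of the kind mentioned above can be written in the special case of bounded transition rates (an assumption, since the article allows unbounded rates): after uniformization with a constant Λ ≥ sup_{x,a} q_x(a), the α-discounted problem becomes a discrete-time contraction fixed-point iteration.

```latex
% Assumed bounded-rate special case: \Lambda \ge \sup_{x,a} q_x(a), discount rate \alpha > 0.
\[
  V_{n+1}(x) \;=\; \sup_{a\in A(x)}
    \Bigl[\tfrac{r(x,a)}{\alpha+\Lambda}
      \;+\; \tfrac{\Lambda}{\alpha+\Lambda}\sum_{y} \widetilde{p}(y\mid x,a)\,V_{n}(y)\Bigr],
  \qquad
  \widetilde{p}(y\mid x,a)=
  \begin{cases}
    q(y\mid x,a)/\Lambda, & y\neq x,\\[2pt]
    1-q_x(a)/\Lambda, & y=x,
  \end{cases}
\]
% with V_0 \equiv 0; the iterates converge to the optimal discounted value because
% the operator on the right-hand side is a contraction with modulus \Lambda/(\alpha+\Lambda).
```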
