Similar Literature
19 similar articles found (search time: 99 ms).
1.
郭先平  戴永隆 《数学学报》2002,45(1):171-182
This paper considers the discounted model of continuous-time Markov decision processes with an arbitrary family of transition rates and a possibly unbounded cost rate function. It drops the traditional conditions, such as the requirement that the Q-process corresponding to each policy be unique, and is the first to treat the case where the Q-process corresponding to a policy need not be unique, the family of transition rates need not be conservative, the cost rate function may be unbounded, and the action spaces are arbitrary nonempty sets. The paper replaces the traditional α-discounted cost optimality equation with an "α-discounted cost optimality inequality" and, using this inequality and new methods, not only proves the main traditional result, namely the existence of an optimal stationary policy, but also investigates the existence of ε(>0)-optimal stationary policies, of optimal stationary policies with monotonicity properties, and of ε(≥0)-optimal decision processes, obtaining several meaningful new results. Finally, an example of a birth-and-death system with controlled migration rates is provided; it satisfies all the conditions of this paper, while none of the traditional assumptions (see [1-14]) hold.
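As a hedged illustration of how such a discounted criterion can be computed numerically, the sketch below applies uniformization and value iteration to a small truncated birth-and-death chain; the truncation level, rates, costs, and discount rate are all invented for the example and are not taken from the paper.

```python
import numpy as np

# Hypothetical truncated birth-and-death chain: states 0..N, actions scale the birth rate.
N, alpha = 20, 1.0            # truncation level and discount rate (both assumed)
actions = [0.5, 1.0, 2.0]     # hypothetical controls acting on the birth rate

def rates(i, a):
    """Birth and death rates at state i under action a (illustrative choice)."""
    birth = a if i < N else 0.0
    death = float(i)
    return birth, death

def cost(i, a):
    return i + 0.5 * a        # holding cost plus control cost (assumed)

Lam = max(sum(rates(i, a)) for i in range(N + 1) for a in actions)  # uniformization constant
beta = Lam / (alpha + Lam)    # effective discount factor of the uniformized chain

V = np.zeros(N + 1)
for _ in range(5000):         # value iteration on the uniformized discrete-time model
    Vnew = np.empty_like(V)
    for i in range(N + 1):
        vals = []
        for a in actions:
            b, d = rates(i, a)
            ev = (b * V[min(i + 1, N)] + d * V[max(i - 1, 0)] + (Lam - b - d) * V[i]) / Lam
            vals.append(cost(i, a) / (alpha + Lam) + beta * ev)
        Vnew[i] = min(vals)
    if np.max(np.abs(Vnew - V)) < 1e-10:
        V = Vnew
        break
    V = Vnew
print(V[:5])                  # approximate alpha-discounted optimal costs near the origin
```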

2.
郭先平 《数学学报》2001,44(2):333-342
This paper considers the average variance criterion for nonstationary MDPs with Borel state and action spaces. First, under an ergodicity condition, the existence of a Markov policy optimal for the average expected criterion is proved by means of the optimality equation. Then, by constructing a new model and applying the theory of Markov processes, it is further shown that within the class of Markov policies optimal for the average expected criterion there exists one that minimizes the average variance. As special cases, the main results of Dynkin E. B. and Yushkevich A. A., Kurano M., and others are recovered.
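A minimal numeric sketch of the quantities involved: for the ergodic chain induced by a fixed policy, the long-run average reward is π·r, and one common formalization of the average variance of the reward stream is Σᵢ πᵢ(rᵢ − ρ)². The chain and rewards below are invented for illustration.

```python
import numpy as np

# Hypothetical ergodic chain induced by some fixed Markov policy.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 4.0])      # one-step rewards (assumed)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

rho = pi @ r                  # long-run average expected reward
avg_var = pi @ (r - rho) ** 2 # average variance of the reward stream
print(rho, avg_var)
```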

3.
This paper studies an inventory system with two delivery modes, fast and slow, and demand forecast updates. To learn more about how the cost parameters and forecast improvement affect the optimal order quantities and the optimal average cost, the two-period case is considered. Using dynamic programming as the tool, the optimal policy of the system is obtained. For demand forecasts following a uniform distribution, exact expressions for the optimal order quantities and the optimal average total cost are derived. Numerical examples illustrate the effects of the holding cost, the penalty cost, demand forecast improvement, and forecast error on the optimal order quantities and the optimal average total cost.
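The paper's exact two-period expressions are not reproduced in the abstract; as a hedged single-period analogue, the newsvendor order quantity under Uniform(a, b) demand has the closed form q* = a + (b − a)·p/(p + h), where p/(p + h) is the critical fractile. The sketch below evaluates it with invented parameters.

```python
# Single-period newsvendor analogue (illustrative only; the paper treats a
# two-period model with fast and slow delivery modes and forecast updates).
a, b = 20.0, 120.0   # demand support of Uniform(a, b) (assumed)
h, p = 1.0, 4.0      # unit holding (overage) and penalty (underage) costs (assumed)

frac = p / (p + h)                 # critical fractile
q_star = a + (b - a) * frac        # inverse CDF of Uniform(a, b) at the fractile

# Expected cost at q*: h * E[(q - D)^+] + p * E[(D - q)^+] for D ~ Uniform(a, b).
exp_over = h * (q_star - a) ** 2 / (2 * (b - a))
exp_under = p * (b - q_star) ** 2 / (2 * (b - a))
print(q_star, exp_over + exp_under)
```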

4.
This paper considers the average criterion for nonstationary Markov decision processes (MDPs) with a countable state space. First, we point out and correct errors in Park et al. [1] and Alden et al. [2], and, under conditions weaker than those of Park et al. [1], prove the convergence of the optimal average values and the existence of an average-optimal Markov policy by means of newly established optimality equations. Second, a rolling horizon algorithm for computing ε(>0)-average optimal Markov policies is given.
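The paper's algorithm itself is not reproduced here; the sketch below shows the generic rolling-horizon idea it builds on: at each decision epoch, solve a finite-horizon dynamic program of length H and apply only its first-stage action. A stationary test model is invented (the paper allows nonstationary data).

```python
import numpy as np

# Invented stationary test model: P[a, i, j] and r[a, i].
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],
              [2.0, 0.5]])

def rolling_horizon_action(i, H):
    """Solve an H-step DP by backward induction; return the first-stage action at state i."""
    V = np.zeros(P.shape[1])
    first = None
    for _ in range(H):
        Q = r + P @ V                 # Q[a, i] = r[a, i] + sum_j P[a, i, j] * V[j]
        first = np.argmax(Q, axis=0)  # greedy action per state at the current stage
        V = Q.max(axis=0)
    return first[i]

# Apply the first action of a horizon-20 plan, then re-solve at the next state.
print(rolling_horizon_action(0, H=20))
```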

5.
This paper considers the average model of nonstationary MDPs proposed by Hinderer, in which both the state space and the action space are general sets. Using the method of enlarging the state space, the optimality equation of this model is established, and conditions are given for the optimality equation to have a solution and for ε-optimal policies to exist. Starting from the optimality equation, the existence of an optimal policy is proved by probabilistic methods. Finally, a value iteration algorithm for this model is provided together with a proof of its convergence, thereby extending the main results of Smith L., Lasserre B. [3], Lerma [6], and others.

6.
郭先平 《数学学报》2000,43(2):269-274
This paper considers the average model of nonstationary MDPs with a countable state space and arbitrary action spaces. Drawing on the ideas of Feinberg E. A. (1994), it introduces the concept of (G,B)-generated policies, which is broader than both Markov policies and Feinberg E. A.'s (f,B)-generated policies. Under a weak ergodicity condition, the existence of a uniformly optimal (G,B)-generated policy is proved by probabilistic analysis, thereby extending the main results of Feinberg E. A. (1994) to the nonstationary countable state space case.

7.
Discounted Semi-Markov Decision Processes with Nonnegative Costs
黄永辉  郭先平 《数学学报》2010,53(3):503-514
This paper considers discounted semi-Markov decision processes with a countable state space and nonnegative costs. A continuous-time semi-Markov decision process is first constructed from the given semi-Markov decision kernel and policy. The minimal nonnegative solution method is then used to prove that the value function satisfies the optimality equation and that ε-optimal stationary policies exist; conditions for the existence of optimal policies and some of their properties are given as well. Finally, a value iteration algorithm and a numerical example are presented.
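The paper's own algorithm and example are not reproduced; as a hedged sketch of the minimal nonnegative solution idea, iterating the Bellman operator upward from V ≡ 0 converges monotonically to the minimal nonnegative solution when costs are nonnegative. In the invented toy model below, the random sojourn times of the semi-Markov process are folded into a state-action discount factor β(i, a) = E[e^(−α·sojourn)].

```python
import numpy as np

# Invented discounted semi-MDP data: P[a, i, j], nonnegative costs c[a, i],
# and effective state-action discount factors beta[a, i] (all assumed).
P = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.9, 0.1]]])
c = np.array([[2.0, 1.0],
              [0.5, 3.0]])
beta = np.array([[0.9, 0.8],
                 [0.7, 0.85]])

V = np.zeros(2)                    # start from 0: iterates increase monotonically
for _ in range(500):
    Vnew = (c + beta * (P @ V)).min(axis=0)
    if np.max(np.abs(Vnew - V)) < 1e-12:
        V = Vnew
        break
    V = Vnew
print(V)                           # approximates the minimal nonnegative solution
```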

8.
Using algebraic methods, this paper obtains the optimal relaxation factor of the additive Schwarz alternating method with a relaxation factor in the two-subdomain case; the result shows that the algebraic mean is optimal. A counterexample then shows that this result cannot be extended to the case of more than two subdomains. Finally, the algebraic technique is applied to prove some existing important results; compared with the original proofs, the present ones are simpler and more intuitive.
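As a hedged numerical companion (not the paper's algebraic proof), the code below runs a weighted additive Schwarz iteration for a 1D Poisson problem with two overlapping subdomains and compares relaxation factors; the weight θ = 1/2 on the summed corrections corresponds to the algebraic mean. Grid size and overlap are invented.

```python
import numpy as np

n = 40                                                   # interior grid points (assumed)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1D Laplacian
b = np.ones(n)

d1 = np.arange(0, 24)          # two overlapping subdomains (overlap width assumed)
d2 = np.arange(16, n)

def schwarz_iters(theta, tol=1e-8, maxit=500):
    """Iterations needed by the relaxed additive Schwarz method (maxit if no convergence)."""
    x = np.zeros(n)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return k
        dx = np.zeros(n)
        for idx in (d1, d2):   # additive: both local solves see the same residual
            dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        x += theta * dx        # relaxation factor weights the summed corrections
    return maxit

for theta in (0.3, 0.5, 0.7, 1.0):
    print(theta, schwarz_iters(theta))
```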

9.
Complete Convergence for Nonstationary ρ-Mixing Sequences
王晓明 《数学学报》1996,39(4):463-476
This paper further studies the complete convergence of ρ-mixing sequences, discussing sufficient and necessary conditions for complete convergence to hold in the unbounded, nonstationary case, and obtains satisfactory results.

10.
This paper considers the average model of nonstationary MDPs with a countable state space and incomplete information. By means of a model transformation, the optimality equation of the incomplete-information nonstationary average MDP model is established, and sufficient conditions for the existence of solutions to the optimality equation and of ε(>0)-optimal policies are given.
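The abstract does not spell out the transformation; a standard device for incomplete-information models is to pass to the belief MDP, whose complete-information state is the posterior distribution over the hidden states. The sketch below performs one Bayes update of a belief; the transition and observation matrices are invented.

```python
import numpy as np

# Invented hidden-state model: transitions P[a][i, j] and observation likelihoods O[j, o].
P = {0: np.array([[0.9, 0.1],
                  [0.3, 0.7]])}
O = np.array([[0.8, 0.2],
              [0.25, 0.75]])

def belief_update(b, a, o):
    """Posterior over hidden states after taking action a and observing o."""
    pred = b @ P[a]            # prediction step
    post = pred * O[:, o]      # correction by the observation likelihood
    return post / post.sum()

b0 = np.array([0.5, 0.5])
print(belief_update(b0, a=0, o=1))
```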

11.
In this paper, we consider nonstationary Markov decision processes (MDP, for short) with the average variance criterion on a countable state space, finite action spaces, and bounded one-step rewards. From the optimality equations provided in this paper, we translate the average variance criterion into a new average expected cost criterion. Then we prove that there exists a Markov policy that is optimal under the original average expected reward criterion and minimizes the average variance within the class of policies optimal for that criterion.

12.
This paper deals with a new optimality criterion consisting of the usual three average criteria together with the canonical triplet (referred to as the strong average-canonical optimality criterion) and introduces the concept of a strong average-canonical policy for nonstationary Markov decision processes, extending the canonical policies of Hernández-Lerma and Lasserre [16, p. 77] for stationary Markov control processes. For the case of possibly non-uniformly bounded rewards and a denumerable state space, we first construct, under some conditions, a solution to the optimality equations (OEs), and then prove that the Markov policies obtained from the OEs are not only optimal for the three average criteria but also optimal for all finite horizon criteria with a sequence of additional functions as their terminal rewards (i.e. strong average-canonical optimal). Some properties of optimal policies and the convergence of optimal average values are also discussed. Moreover, the error bound in average reward between a rolling horizon policy and a strong average-canonical optimal policy is provided, and a rolling horizon algorithm for computing strong average ε(>0)-optimal Markov policies is given.

13.
This paper studies discrete-time nonlinear controlled stochastic systems, modeled by controlled Markov chains (CMC) with a denumerable state space, a compact action space, and an infinite planning horizon. Recently, there has been a renewed interest in CMC with a long-run expected average cost (AC) optimality criterion. A classical approach to the study of average optimality consists in formulating the AC case as a limit of the discounted cost (DC) case as the discount factor increases to 1, i.e., as the discounting effect vanishes. This approach has been rekindled in recent years with the introduction by Sennott and others of conditions under which AC optimal stationary policies are shown to exist. However, AC optimality is a rather underselective criterion, which completely neglects the finite-time evolution of the controlled process. Our main interest in this paper is to study the relation between the notions of AC optimality and strong average cost (SAC) optimality. The latter criterion is introduced to assess the performance of a policy over long but finite horizons, as well as in the long-run average sense. We show that for bounded one-stage cost functions, Sennott's conditions are sufficient to guarantee that every AC optimal policy is also SAC optimal. On the other hand, a detailed counterexample is given showing that the latter result does not extend to the case of unbounded cost functions. In this counterexample, Sennott's conditions are verified and a policy is exhibited that is both average and Blackwell optimal and satisfies the average cost inequality.
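A hedged numeric illustration of the vanishing-discount approach discussed above (not the paper's counterexample): for a fixed policy on a small unichain model, (1 − β)·V_β converges to the average cost ρ as β ↑ 1. The chain and costs are invented.

```python
import numpy as np

# Invented fixed-policy chain and one-stage costs.
P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
c = np.array([1.0, 3.0])

# Average cost rho = pi . c via the stationary distribution.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
rho = pi @ c

for beta in (0.9, 0.99, 0.999):
    V = np.linalg.solve(np.eye(2) - beta * P, c)   # discounted cost of the policy
    print(beta, (1 - beta) * V, rho)               # (1 - beta) * V_beta -> rho as beta -> 1
```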

14.
This paper deals with discrete-time Markov control processes in Borel spaces, with unbounded rewards. The criterion to be optimized is a long-run sample-path (or pathwise) average reward subject to constraints on a long-run pathwise average cost. To study this pathwise problem, we give conditions for the existence of optimal policies for the problem with “expected” constraints. Moreover, we show that the expected case can be solved by means of a parametric family of optimality equations. These results are then extended to the problem with pathwise constraints.
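As a hedged sketch of how the "expected"-constraints problem can be attacked in the finite unichain case, the classical linear program over occupation measures maximizes the average reward subject to flow balance, normalization, and an average-cost budget. This is a standard device, not the paper's parametric optimality-equation method, and every number below is invented.

```python
import numpy as np
from scipy.optimize import linprog

# Invented 2-state, 2-action model: P[a, i, j], reward r[a, i], constraint cost cst[a, i].
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [2.0, 0.5]])
cst = np.array([[0.0, 1.0], [2.0, 0.0]])
kappa = 0.8                                  # average-cost budget (assumed)

nS, nA = 2, 2                                # variables x[i, a] flattened as i * nA + a
c_obj = np.array([-r[a, i] for i in range(nS) for a in range(nA)])   # maximize reward

A_eq, b_eq = [], []
for j in range(nS):                          # flow balance at each state j
    A_eq.append([(1.0 if i == j else 0.0) - P[a, i, j]
                 for i in range(nS) for a in range(nA)])
    b_eq.append(0.0)
A_eq.append([1.0] * (nS * nA))               # normalization: x is a distribution
b_eq.append(1.0)

A_ub = [[cst[a, i] for i in range(nS) for a in range(nA)]]           # cost budget
b_ub = [kappa]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)                    # occupation measure; a policy row-normalizes x
print(-res.fun, x)
```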

15.
This paper deals with Blackwell optimality for continuous-time controlled Markov chains with compact Borel action space, and possibly unbounded reward (or cost) rates and unbounded transition rates. We prove the existence of a deterministic stationary policy which is Blackwell optimal in the class of all admissible (nonstationary) Markov policies, thus extending previous results that analyzed Blackwell optimality in the class of stationary policies. We compare our assumptions to the corresponding ones for discrete-time Markov controlled processes.
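A hedged finite toy check of the Blackwell idea (the paper's setting is continuous-time and far more general): a policy is Blackwell optimal if it is discount-optimal for all discount factors sufficiently close to 1, so the sketch compares the discounted values of two invented stationary policies over a grid of factors approaching 1.

```python
import numpy as np

# Two invented stationary policies on a 2-state chain: (transition matrix, reward vector).
policies = {
    "f": (np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([1.0, 2.0])),
    "g": (np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([1.2, 1.7])),
}

for beta in (0.9, 0.99, 0.999, 0.9999):
    vals = {k: np.linalg.solve(np.eye(2) - beta * P, r) for k, (P, r) in policies.items()}
    # A policy whose value dominates for every beta near 1 is Blackwell optimal in this toy class.
    print(beta, {k: np.round(v, 3) for k, v in vals.items()})
```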

16.
This paper establishes a rather complete optimality theory for the average cost semi-Markov decision model with a denumerable state space, compact metric action sets, and unbounded one-step costs, for the case where the underlying Markov chains have a single ergodic set. Under a condition which, roughly speaking, requires the existence of a finite set such that the supremum over all stationary policies of the expected time and the total expected absolute cost incurred until the first return to this set are finite for any starting state, we verify the existence of a finite solution to the average cost optimality equation and the existence of an average cost optimal stationary policy.
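A hedged sketch of computing a solution of the average cost optimality equation on a finite truncation of such a model: relative value iteration renormalizes at a reference state each sweep, and in the unichain, aperiodic case the scalar g converges to the optimal average cost. The model below is invented, and the paper's random semi-Markov transition times are ignored here.

```python
import numpy as np

# Invented finite unichain model: P[a, i, j] and one-step costs c[a, i].
P = np.array([[[0.7, 0.3], [0.2, 0.8]],
              [[0.4, 0.6], [0.9, 0.1]]])
c = np.array([[1.0, 2.0],
              [3.0, 0.5]])

h = np.zeros(2)                     # relative values, normalized so h[0] = 0
g = 0.0
for _ in range(1000):
    Th = (c + P @ h).min(axis=0)    # Bellman operator (minimizing costs)
    g = Th[0]                       # running estimate of the optimal average cost
    hnew = Th - Th[0]               # renormalize at the reference state
    if np.max(np.abs(hnew - h)) < 1e-12:
        h = hnew
        break
    h = hnew
print(g, h)                          # ACOE: g + h(i) = min_a [ c(i,a) + sum_j P(j|i,a) h(j) ]
```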

17.
18.
In this paper, applying an interval arithmetic analysis, we consider the average case of controlled Markov set-chains, whose process allows for fluctuating transition matrices at each step in time. We introduce a v-step contractive property for the average case, under which a Pareto optimal periodic policy is characterized as a maximal solution of the optimality equation. Also, in the class of stationary policies, the behavior of the expected reward over a T-horizon as T approaches ∞ is investigated, and the left- and right-hand-side optimality equations are given, by which a Pareto optimal stationary policy is found. As a numerical example, the Taxicab problem is considered.
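A hedged glimpse of the interval-arithmetic machinery behind Markov set-chains (not the paper's algorithm): given entrywise bounds L ≤ P ≤ U on the unknown transition matrix, a distribution's one-step image is bracketed componentwise by the products below. These naive bounds are valid but not tight; the set-chain literature sharpens them using row-stochasticity. All numbers are invented.

```python
import numpy as np

# Entrywise bounds L <= P <= U on the fluctuating transition matrix (invented).
L = np.array([[0.6, 0.2],
              [0.1, 0.7]])
U = np.array([[0.8, 0.4],
              [0.3, 0.9]])

p = np.array([0.5, 0.5])      # current distribution (nonnegative), so products are monotone

lo, hi = p @ L, p @ U         # componentwise bounds on the next distribution p @ P
print(lo, hi)
```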

19.
Optimal control problems with a vector performance index and uncertainty in the state equations are investigated. Nature chooses the uncertainty, subject to magnitude bounds. For these problems, a definition of optimality is presented. This definition reduces to that of a minimax control in the case of a scalar cost and to Pareto optimality when there is no uncertainty or disturbance present. Sufficient conditions for a control to satisfy this definition of optimality are derived. These conditions are in terms of a related two-player zero-sum differential game and suggest a technique for determining the optimal control. The results are illustrated with an example. This research was supported by AFOSR under Grant No. 76-2923.
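The sufficient conditions involve a two-player zero-sum game; as a hedged finite analogue (the paper's game is a differential game), the value and a maximin strategy of a zero-sum matrix game can be computed by linear programming, here with an invented payoff matrix.

```python
import numpy as np
from scipy.optimize import linprog

# Invented payoff matrix A[i, j]: the row player receives A[i, j], the column player pays it.
A = np.array([[1.0, -1.0, 2.0],
              [0.0, 1.5, -0.5]])

# Shift to make all entries positive, then solve: min 1'y  s.t.  B' y >= 1, y >= 0.
# The game value is 1 / sum(y) - shift and the row strategy is x = y / sum(y).
shift = 1.0 - A.min()
B = A + shift
m, n = B.shape
res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n), bounds=(0, None))
y = res.x
value = 1.0 / y.sum() - shift
x = y / y.sum()
print(value, x)               # minimax value and the row player's optimal mixed strategy
```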
