Sample-path optimality and variance-maximization for Markov decision processes
Authors: Q X Zhu
Institution: (1) Department of Mathematics, South China Normal University, Guangzhou, 510631, People's Republic of China
Abstract: This paper studies both the average sample-path reward (ASPR) criterion and the limiting average variance criterion for denumerable discrete-time Markov decision processes. The rewards may have neither upper nor lower bounds. We give sufficient conditions on the system's primitive data under which we prove the existence of ASPR-optimal stationary policies and variance-optimal policies. Our conditions are weaker than those in the previous literature. Moreover, our results are illustrated by a controlled queueing system.
Research partially supported by the Natural Science Foundation of Guangdong Province (Grant No. 06025063) and the Natural Science Foundation of China (Grant No. 10626021).
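For orientation, a hedged sketch of the two criteria named above, in notation assumed here rather than quoted from the paper: for a policy $\pi$, initial state $x$, state-action process $\{(x_t, a_t)\}$, and reward function $r$, the average sample-path reward is commonly defined pathwise as
$$J(\pi, x) = \limsup_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} r(x_t, a_t) \quad \text{almost surely},$$
and one standard form of the limiting average variance is
$$V(\pi, x) = \limsup_{n \to \infty} \frac{1}{n} \, E_x^{\pi}\!\left[ \sum_{t=0}^{n-1} \bigl( r(x_t, a_t) - g(\pi, x) \bigr)^2 \right],$$
where $g(\pi, x)$ denotes the expected average reward. The variance criterion is typically optimized within the class of average-reward optimal policies; the paper's exact formulation may differ in detail.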
Keywords: Discrete-time Markov decision process; Unbounded reward; Sample-path reward criterion; Variance-maximization; Optimal stationary policy
This document is indexed in SpringerLink and other databases.