Sample-path optimality and variance-maximization for Markov decision processes |
| |
Authors: | Q X Zhu |
| |
Institution: | (1) Department of Mathematics, South China Normal University, Guangzhou, 510631, People’s Republic of China |
| |
Abstract: | This paper studies both the average sample-path reward (ASPR) criterion and the limiting average variance criterion for denumerable discrete-time Markov decision processes. The rewards may have neither upper nor lower bounds. We give sufficient conditions on the system’s primitive data under which we prove the existence of ASPR-optimal stationary policies and variance-optimal policies. Our conditions
are weaker than those in the previous literature. Moreover, our results are illustrated by a controlled queueing system.
Research partially supported by the Natural Science Foundation of Guangdong Province (Grant No: 06025063) and the Natural
Science Foundation of China (Grant No: 10626021). |
| |
Keywords: | Discrete-time Markov decision process Unbounded reward Sample-path reward criterion Variance-maximization Optimal stationary policy |
This document has been indexed by SpringerLink and other databases.