Average optimality for continuous-time Markov decision processes with a policy iteration approach |
| |
Authors: | Quanxin Zhu |
| |
Affiliation: | Department of Mathematics, South China Normal University, Guangzhou 510631, PR China |
| |
Abstract: | This paper deals with the average expected reward criterion for continuous-time Markov decision processes in general state and action spaces. The transition rates of the underlying continuous-time jump Markov processes are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. We give conditions on the system's primitive data under which we prove the existence of the average reward optimality equation and of an average optimal stationary policy. Also, under our conditions we ensure the existence of ε-average optimal stationary policies. Moreover, we study some properties of average optimal stationary policies. We not only establish another average optimality equation on an average optimal stationary policy, but also present an interesting “martingale characterization” of such a policy. The approach provided in this paper is based on the policy iteration algorithm. It should be noted that our approach differs from both the usual “vanishing discount factor approach” and the “optimality inequality approach” widely used in the previous literature. |
| |
Keywords: | Continuous-time Markov decision process; Policy iteration algorithm; Average criterion; Optimality equation; Optimal stationary policy |
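To illustrate the policy iteration algorithm the abstract refers to, here is a minimal sketch for the long-run average reward criterion. It assumes a finite-state, discrete-time MDP with unichain dynamics under every policy, which is far simpler than the paper's setting (continuous-time processes on general spaces with unbounded transition and reward rates); the function name and interface are hypothetical. Each iteration solves the evaluation equation g·1 + h = r_π + P_π h for the gain g and bias h of the current policy π, then improves π greedily.

```python
import numpy as np

def average_reward_policy_iteration(P, r, max_iter=100):
    """Policy iteration for the average reward criterion on a finite
    unichain MDP (hypothetical sketch, not the paper's algorithm).
    P[a]: (n, n) transition matrix under action a; r[a]: (n,) rewards."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    g, h = 0.0, np.zeros(n_states)
    for _ in range(max_iter):
        # Policy evaluation: solve g*1 + h = r_pi + P_pi h with h[0] = 0.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
        A = np.eye(n_states) - P_pi
        A[:, 0] = 1.0              # column of ones carries the gain g
        sol = np.linalg.solve(A, r_pi)
        g, h = sol[0], sol.copy()
        h[0] = 0.0                 # normalize the bias
        # Policy improvement: act greedily w.r.t. r_a + P_a h.
        q = np.array([r[a] + P[a] @ h for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break                  # greedy policy is unchanged: optimal
        policy = new_policy
    return g, h, policy
```

In the continuous-time setting treated by the paper, the evaluation step instead involves the generator of the jump process, and the unboundedness of the rates is what makes the convergence analysis delicate.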
|