Continuous-Time Markov Decision Processes with Unbounded Transition and Discounted-Reward Rates

Authors: Hao Yan, Junyu Zhang

Institution: 1. Department of Electronics and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong; 2. School of Mathematics and Computational Science, Zhongshan University, Guangzhou, People's Republic of China

Abstract: In this article, we study continuous-time Markov decision processes in Polish spaces. The optimality criterion to be maximized is the expected discounted reward. The transition rates may be unbounded, and the reward rates may have neither upper nor lower bounds. We provide conditions on the controlled system's primitive data under which, using Feller's construction approach, we prove that the transition functions of possibly non-homogeneous continuous-time Markov processes are regular. Then, under continuity and compactness conditions, we prove the existence of optimal stationary policies by using the technique of extended infinitesimal operators associated with these transition functions, and we also provide a recursive way to compute (or at least to approximate) the optimal reward values. The conditions provided in this paper differ from those used in the previous literature, and they are illustrated with an example.
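For orientation, the expected discounted criterion mentioned in the abstract can be written in standard form as follows; this is the conventional notation for discounted continuous-time MDPs, not necessarily the exact symbols used in the paper:

```latex
% Expected discounted reward under policy \pi from initial state x,
% with discount rate \alpha > 0 and reward rate r(x(t), a(t)):
V_\alpha(\pi, x) \;=\; \mathbb{E}_x^{\pi}\!\left[ \int_0^{\infty} e^{-\alpha t}\, r\bigl(x(t), a(t)\bigr)\, dt \right],
\qquad V_\alpha^*(x) \;=\; \sup_{\pi} V_\alpha(\pi, x).
```

A policy $\pi^*$ is optimal when $V_\alpha(\pi^*, x) = V_\alpha^*(x)$ for all states $x$; the paper establishes the existence of such a policy in the stationary class under its continuity and compactness conditions.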
| |
Keywords: Discounted reward criterion; General state space; Optimal stationary policy; Q-process