Discounted continuous-time Markov decision processes with unbounded rates and randomized history-dependent policies: the dynamic programming approach

Authors: Alexey Piunovskiy, Yi Zhang

Affiliation: Department of Mathematical Sciences, University of Liverpool, Liverpool L69 7ZL, UK

Abstract: This paper deals with a continuous-time Markov decision process in Borel state and action spaces and with unbounded transition rates. Under history-dependent policies, the controlled process may not be Markov. The main contribution is that, for such non-Markov processes, we establish the Dynkin formula, which plays an important role in establishing optimality results for continuous-time Markov decision processes. We further illustrate this by showing, for a discounted continuous-time Markov decision process, the existence of a deterministic stationary optimal policy (out of the class of history-dependent policies) and by characterizing the value function through the Bellman equation.

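For orientation, the following is a minimal sketch of the Bellman (optimality) equation typically associated with discounted continuous-time Markov decision processes; the notation (state space X, admissible action sets A(x), cost rate c, conservative transition rate kernel q, discount factor alpha) is assumed here for illustration and is not taken from the paper itself.

% Sketch only, under assumed notation: cost rate c(x,a), discount factor
% \alpha > 0, and a conservative signed transition rate kernel q(dy|x,a)
% with q(X|x,a) = 0. The standard discounted-cost Bellman equation reads:
\[
  \alpha V^{*}(x) \;=\; \inf_{a \in A(x)}
    \Bigl[\, c(x,a) \;+\; \int_{X} V^{*}(y)\, q(\mathrm{d}y \mid x, a) \Bigr],
  \qquad x \in X.
\]
% A deterministic stationary policy attaining the infimum above (when such a
% selector exists) is then optimal, which is the type of statement the paper
% establishes within the class of randomized history-dependent policies.
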
Keywords:
|