Stochastic approximations of constrained discounted Markov decision processes

Authors: François Dufour, Tomás Prieto-Rumeau

Affiliation: 1. Université Bordeaux I, INRIA Bordeaux Sud Ouest, France; 2. Statistics Department, UNED, Madrid, Spain

Abstract: We consider a discrete-time constrained Markov decision process under the discounted cost optimality criterion. The state and action spaces are assumed to be Borel spaces, while the cost and constraint functions might be unbounded. We are interested in approximating numerically the optimal discounted constrained cost. To this end, we suppose that the transition kernel of the Markov decision process is absolutely continuous with respect to some probability measure μ. Then, by solving the linear programming formulation of a constrained control problem related to the empirical probability measure μ_n of μ, we obtain the corresponding approximation of the optimal constrained cost. We derive a concentration inequality which gives bounds on the probability that the estimation error is larger than some given constant. This bound is shown to decrease exponentially in n. Our theoretical results are illustrated with a numerical application based on a stochastic version of the Beverton–Holt population model.

Keywords: Constrained Markov decision processes; Linear programming approach to control problems; Approximation of Markov decision processes
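
As an illustration of the linear programming approach described in the abstract, here is a minimal Python sketch of the occupation-measure LP for a constrained discounted MDP on a finite state-action grid. It is not the authors' construction: the transition matrix P, the costs c and d, the constraint bound theta, the discount factor alpha, and the initial distribution nu below are illustrative placeholders, whereas in the paper the finite model arises from the empirical measure μ_n of the dominating measure μ.

```python
# Minimal sketch (assumed toy data, not the paper's construction): solve the
# occupation-measure linear program of a constrained discounted MDP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n_states, n_actions = 6, 3   # toy grid sizes (assumption)
alpha = 0.9                  # discount factor
theta = 5.0                  # constraint bound (assumption)

# Illustrative data: P[x, a, :] is a probability vector over next states.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((n_states, n_actions))        # cost to be minimized
d = rng.random((n_states, n_actions)) * 2.0  # constraint cost
nu = np.full(n_states, 1.0 / n_states)       # initial distribution

# Decision variable: discounted occupation measure rho(x, a), flattened row-major.
# Balance equations: sum_a rho(y, a) - alpha * sum_{x,a} rho(x, a) P(y | x, a) = nu(y).
A_eq = np.zeros((n_states, n_states * n_actions))
for y in range(n_states):
    for x in range(n_states):
        for a in range(n_actions):
            A_eq[y, x * n_actions + a] = (1.0 if x == y else 0.0) - alpha * P[x, a, y]
b_eq = nu

# Constraint on the normalized discounted constraint cost:
# (1 - alpha) * sum_{x,a} rho(x, a) d(x, a) <= theta.
A_ub = (1.0 - alpha) * d.reshape(1, -1)
b_ub = np.array([theta])

res = linprog(
    c=(1.0 - alpha) * c.reshape(-1),  # normalized discounted cost objective
    A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
    bounds=(0, None), method="highs",
)
print("Approximate optimal constrained discounted cost:", res.fun)
```

Replacing the random data with a model discretized through i.i.d. samples from μ would mimic the empirical approximation studied in the paper, and repeating the computation for growing n would illustrate the decay of the estimation error bound.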
|