On the value function for optimal control problems with infinite horizon
Authors: A. Leitão
Abstract: We consider optimal control problems of infinite-horizon type, whose control laws are given by L^1_loc-functions and whose objective function has the meaning of a discounted utility. Our main objective is to verify that the value function is a viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation in this framework. The usual terminal condition for the HJB equation in the finite-horizon case (V(T, x) = 0 or V(T, x) = g(x)) must be replaced by a decay condition at infinity. Following the dynamic programming approach, we obtain Bellman's optimality principle and the dynamic programming equation (see (3)). We also prove a regularity result (local Lipschitz continuity) for the value function.
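For orientation, a standard formulation of the discounted infinite-horizon problem and its stationary HJB equation can be sketched as follows; the notation (dynamics f, running utility L, discount rate λ) is assumed here for illustration and is not taken from the paper itself:

```latex
% Standard discounted infinite-horizon problem (assumed notation):
% state y, control u in L^1_loc, dynamics f, running cost L, discount rate lambda > 0.
\[
  V(x) \;=\; \inf_{u \in L^1_{\mathrm{loc}}}
  \int_0^{\infty} e^{-\lambda t}\, L\bigl(y(t), u(t)\bigr)\, dt,
  \qquad
  \dot y(t) = f\bigl(y(t), u(t)\bigr),\quad y(0) = x.
\]
% The associated stationary HJB equation, of which the value function V
% is shown to be a viscosity solution:
\[
  \lambda V(x) \;+\; \sup_{a}\,
  \bigl\{ -f(x,a)\cdot DV(x) \;-\; L(x,a) \bigr\} \;=\; 0.
\]
```

In the finite-horizon case this equation would be closed by a terminal condition such as V(T, x) = g(x); here, as the abstract notes, that role is played by a decay condition on V at infinity.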