Abstract: | We consider optimal control problems of infinite-horizon type whose control laws are locally integrable (L¹_loc) functions and whose objective function represents a discounted utility. Our main goal is to verify that, in this framework, the value function is a viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation. The usual terminal condition for the HJB equation in the finite-horizon case (V(T, x) = 0 or V(T, x) = g(x)) must be replaced by a decay condition at infinity. Following the dynamic programming approach, we obtain Bellman's optimality principle and the dynamic programming equation (see (3)). We also prove a regularity result (local Lipschitz continuity) for the value function. |
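For orientation, the stationary HJB equation in the infinite-horizon discounted setting typically takes the following form; the symbols used here (discount rate λ, dynamics f, running utility U, control set A) are standard placeholders and may differ from the notation in the paper itself:

```latex
% Value function of the infinite-horizon discounted problem
% (sup over admissible controls \alpha, trajectory y solving y' = f(y,\alpha)):
V(x) \;=\; \sup_{\alpha}\; \int_0^{\infty} e^{-\lambda t}\, U\bigl(y(t),\alpha(t)\bigr)\, dt,
\qquad y(0) = x.

% Associated stationary HJB equation, to be understood in the viscosity sense:
\lambda V(x) \;+\; \sup_{a \in A}\,\bigl\{\, -f(x,a)\cdot DV(x) \;-\; U(x,a) \,\bigr\} \;=\; 0,
\qquad x \in \mathbb{R}^n.
```

In contrast to the finite-horizon problem, where V(T, x) = 0 or V(T, x) = g(x) pins down the solution at the terminal time, uniqueness for this stationary equation requires growth or decay information on V at infinity, which is the role of the decay condition mentioned above.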