Similar Articles (20 results)
1.
In a previous paper, the author introduced a new notion of (generalized) viscosity solution for Hamilton-Jacobi equations with an unbounded nonlinear term. It is proved here that the minimal time function (resp. the optimal value function) for time optimal control problems (resp. optimal control problems) governed by evolution equations is a (generalized) viscosity solution of the Bellman equation (resp. the dynamic programming equation). It is also proved that the Neumann problem in convex domains may be viewed as a Hamilton-Jacobi equation with a suitable unbounded nonlinear term.
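For orientation, in the standard finite-dimensional time-optimal problem with dynamics f, control set A, and target set 𝒯 (notation assumed here for illustration; the paper's evolution-equation setting replaces these with unbounded operators), the minimal time function T solves the stationary Bellman equation

```latex
\sup_{a \in A} \big\{ -f(x,a) \cdot \nabla T(x) \big\} = 1
\quad \text{in } \Omega \setminus \mathcal{T},
\qquad T = 0 \ \text{on } \mathcal{T}.
```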

2.
Optimization, 2012, 61(3): 347-363
In this article, minimax optimal control problems governed by parabolic equations are considered. We apply a new dual dynamic programming approach to derive sufficient optimality conditions for such problems. The idea is to move all the notions from the state space to a dual space and to obtain a new verification theorem providing conditions that should be satisfied by a solution of the dual partial differential equation of dynamic programming. We also give sufficient conditions for the existence of an optimal dual feedback control, together with an approximation of the problem considered that seems very useful from a practical point of view.

3.
We study an infinite horizon optimal control problem for a system with two state variables. One of them evolves according to a controlled ordinary differential equation, and the other is related to it by a hysteresis relation, represented here by either a play operator or a Prandtl-Ishlinskii operator. By dynamic programming, we derive the corresponding (discontinuous) first-order Hamilton-Jacobi equation, which is finite-dimensional in the first case and infinite-dimensional in the second. In both cases we prove that the value function is the only bounded uniformly continuous viscosity solution of the equation.
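For concreteness, here is a minimal discrete-time sketch of the scalar play operator (an illustrative assumption, not the paper's construction); the Prandtl-Ishlinskii operator is a weighted superposition of such plays over a range of half-widths r.

```python
def play(u, r, w0=0.0):
    """Discrete scalar play operator with half-width r >= 0.

    At each step the output w moves only as much as needed to stay
    inside the moving window [u_k - r, u_k + r]:
        w_k = min(u_k + r, max(u_k - r, w_{k-1})).
    """
    w = w0
    out = []
    for uk in u:
        w = min(uk + r, max(uk - r, w))
        out.append(w)
    return out

# usage: a triangular input produces the characteristic hysteresis loop
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
print(play(signal, r=0.3))
```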

4.
Mu Xiaowu, Liu Haijun. Chinese Quarterly Journal of Mathematics, 2006, 21(2): 185-195
This paper studies an optimal control problem for general nonlinear systems with finitely many admissible control settings and with costs assigned to switching between controls. Using dynamic programming and viscosity solution theory, we show that the switching lower-value function is a viscosity solution of the appropriate system of quasi-variational inequalities (the appropriate generalization of the Hamilton-Jacobi equation in this context; a representative form is sketched below) and that the minimal such switching-storage function equals the continuous switching lower value for the game. Using the lower-value function, an optimal switching control is designed that minimizes the cost of running the system.
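As a hedged illustration (the paper's exact formulation may differ), a stationary quasi-variational system for switching among modes i = 1, …, m, with discount rate λ, mode Hamiltonians H_i, and switching costs c(i,j) > 0, typically reads:

```latex
\max\Big\{ \lambda V_i(x) + H_i\big(x, \nabla V_i(x)\big),\;
V_i(x) - \min_{j \neq i}\big[ V_j(x) + c(i,j) \big] \Big\} = 0,
\qquad i = 1, \dots, m.
```

The second argument of the max encodes that switching immediately to the cheapest alternative mode can never strictly lower the value.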

5.
The dynamic programming formulation of the forward principle of optimality in the solution of optimal control problems results in a partial differential equation with an initial boundary condition whose solution is independent of terminal cost and terminal constraints. Based on this property, two computational algorithms are described. The first-order algorithm, with minimum computer storage requirements, uses only integration of a system of differential equations with specified initial conditions and numerical minimization in finite-dimensional space. The second-order algorithm is based on the differential dynamic programming approach. Either algorithm may be used for problems with nondifferentiable terminal cost or terminal constraints, and the solution of problems with complicated terminal conditions (e.g., with free terminal time) is greatly simplified.
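A minimal sketch of the reduction the first-order algorithm relies on: parameterize the control, integrate the state forward from a specified initial condition, and minimize the resulting cost in finite-dimensional space. The dynamics, costs, and derivative-free minimizer below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rollout_cost(u, x0=1.0, T=1.0):
    """Forward-Euler integration of x' = -x + u under a piecewise-constant
    control u, accumulating the cost integral of (x^2 + u^2) dt plus x(T)^2."""
    dt = T / len(u)
    x, cost = x0, 0.0
    for uk in u:
        cost += dt * (x**2 + uk**2)   # running cost (illustrative)
        x += dt * (-x + uk)           # dynamics (illustrative)
    return cost + x**2                # terminal cost (illustrative)

def minimize_control(n=20, sweeps=60, step=0.1):
    """Crude coordinate descent over the n control parameters."""
    u = np.zeros(n)
    for _ in range(sweeps):
        for k in range(n):
            for du in (step, -step):
                trial = u.copy()
                trial[k] += du
                if rollout_cost(trial) < rollout_cost(u):
                    u = trial
    return u, rollout_cost(u)

u_star, J_star = minimize_control()
print(f"approximate optimal cost: {J_star:.4f}")
```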

6.
A. Leitão. PAMM, 2002, 1(1): 95-96
We consider optimal control problems of infinite horizon type, whose control laws are given by L^1_loc functions and whose objective function has the meaning of a discounted utility. Our main objective is to verify that the value function is a viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation in this framework. The usual terminal condition for the HJB equation in the finite horizon case (V(T, x) = 0 or V(T, x) = g(x)) has to be replaced by a decay condition at infinity. Following the dynamic programming approach, we obtain Bellman's optimality principle and the dynamic programming equation (see (3)). We also prove a regularity result (local Lipschitz continuity) for the value function.
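In this framework the HJB equation presumably takes the standard discounted stationary form (with discount rate λ > 0, dynamics f, and utility U; the notation is assumed here for illustration, not taken from the paper):

```latex
\lambda V(x) - \sup_{a \in A} \big\{ f(x,a) \cdot \nabla V(x) + U(x,a) \big\} = 0,
\qquad x \in \mathbb{R}^n,
```

with a decay condition on V at infinity playing the role of the finite-horizon terminal condition.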

7.
In this paper, we consider nonlinear optimal control problems with a Bolza functional and fixed terminal time. We suggest a construction of optimal grid synthesis. For each initial state of the control system, we obtain an estimate of the difference between the optimal result and the value of the functional on the trajectory generated by the suggested grid positional control. The feedback control constructions and the estimates of their efficiency are based on a backward dynamic programming procedure. We also use necessary and sufficient optimality conditions in terms of characteristics of the Bellman equation and the subdifferential of the minimax viscosity solution of this equation in the Cauchy problem specified for the fixed terminal time. The results are illustrated by the numerical solution of a nonlinear optimal control problem.

8.
Value functions for convex optimal control problems on infinite time intervals are studied in the framework of duality. Hamilton-Jacobi characterizations and the conjugacy of primal and dual value functions are of main interest. Close ties between the uniqueness of convex solutions to a Hamilton-Jacobi equation, the uniqueness of such solutions to a dual Hamilton-Jacobi equation, and the conjugacy of primal and dual value functions are displayed. Simultaneous approximation of primal and dual infinite horizon problems with a pair of dual problems on finite horizon, for which the value functions are conjugate, leads to sufficient conditions on the conjugacy of the infinite time horizon value functions. Consequently, uniqueness results for the Hamilton-Jacobi equation are established. Little regularity is assumed on the cost functions in the control problems; correspondingly, the Hamiltonians need not display any strict convexity and may have several saddle points.



9.
Optimization, 2012, 61(2): 227-240
In this article, the idea of dual dynamic programming is applied to optimal control problems with multiple integrals governed by a semilinear elliptic PDE and mixed state-control constraints. The main result, called a verification theorem, provides new sufficient conditions for optimality in terms of a solution to the dual equation of multidimensional dynamic programming. The optimality conditions are also obtained by using the concept of an optimal dual feedback control. Besides seeking exact minimizers of the problems considered, an approximation is given and sufficient conditions for an approximate optimal pair are derived.

10.
In this paper, we investigate the regularizing effect of a non-local operator on first-order Hamilton-Jacobi equations. We prove that there exists a unique solution that is C^2 in space and C^1 in time. In order to do so, we combine viscosity solution techniques and Green's function techniques. Viscosity solution theory provides the existence of a W^{1,∞} solution as well as uniqueness and stability results. A Duhamel integral representation of the equation involving the Green's function makes it possible to prove further regularity. We also state the existence of C^∞ solutions (in space and time) under suitable assumptions on the Hamiltonian. We finally give an error estimate in the L^∞ norm between the viscosity solution of the pure Hamilton-Jacobi equation and the solution of the integro-differential equation with a vanishing non-local part.
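A hedged model equation for this class (an illustrative assumption; the paper's non-local operator is more general) is a Hamilton-Jacobi equation perturbed by the half-Laplacian:

```latex
u_t + H(\nabla u) + (-\Delta)^{1/2} u = 0 \quad \text{in } \mathbb{R}^n \times (0, T),
\qquad u(\cdot, 0) = u_0,
```

where the vanishing non-local part mentioned at the end corresponds to multiplying the non-local term by ε and letting ε → 0.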

11.
We study the Hamilton-Jacobi equation for undiscounted exit time control problems with general nonnegative Lagrangians using the dynamic programming approach. We prove theorems characterizing the value function as the unique bounded-from-below viscosity solution of the Hamilton-Jacobi equation that is null on the target. The result applies to problems with the property that all trajectories satisfying a certain integral condition must stay in a bounded set. We allow problems in which the Lagrangian is not uniformly bounded below by positive constants and in which, therefore, the hypotheses of the known uniqueness results for Hamilton-Jacobi equations are not satisfied. We apply our theorems to eikonal equations from geometric optics, shape-from-shading equations from image processing, and variants of the Fuller problem.
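For instance, the eikonal equation of geometric optics is the Hamilton-Jacobi equation of an exit-time problem with Lagrangian equal to the refractive index n(x) ≥ 0 (which may vanish somewhere, so the Lagrangian need not be bounded below by a positive constant) and target set 𝒯:

```latex
|\nabla u(x)| = n(x) \quad \text{in } \Omega \setminus \mathcal{T},
\qquad u = 0 \ \text{on } \mathcal{T}.
```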

12.
This paper presents a nonlinear, multi-phase, stochastic dynamical system motivated by an engineering application. We show that the stochastic dynamical system has a unique solution for every initial state. A stochastic optimal control model is constructed, and sufficient and necessary conditions for optimality are proved via the dynamic programming principle. The model can be converted into a parametric nonlinear stochastic program by integrating the state equation. It is shown that the local optimal solution depends continuously on the parameters. A revised Hooke–Jeeves algorithm based on this property is developed; a sketch of the classic method it starts from is given below. Numerical results from computer simulation illustrate the validity and efficiency of the algorithm.
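For reference, a minimal sketch of the classic Hooke–Jeeves pattern search (the textbook method; the paper's revised variant is not reproduced here):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Classic Hooke-Jeeves pattern search for derivative-free minimization."""

    def explore(base, s):
        # exploratory move: probe +/- s along each coordinate, keep improvements
        x = base.copy()
        for i in range(len(x)):
            for d in (s, -s):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = np.asarray(x0, dtype=float)
    while step > tol:
        y = explore(x, step)
        if f(y) < f(x):
            # pattern move: extrapolate along the successful direction, re-explore
            z = explore(y + (y - x), step)
            x = z if f(z) < f(y) else y
        else:
            step *= shrink   # no improvement: refine the mesh
    return x

# usage: minimize a shifted quadratic
print(hooke_jeeves(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [0.0, 0.0]))
```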

13.
A finite collection of piecewise-deterministic processes is controlled in order to minimize the expected value of a performance functional with continuous operating cost and discrete switching control costs. The solution of the associated dynamic programming equation is obtained by an iterative approximation using optimal stopping time problems. This research was supported in part by NSF Grant No. DMS-8508651 and by a University of Tennessee Science Alliance Research Incentive Award.

14.
Optimization, 2012, 61(3): 521-537
Strong second-order conditions in mathematical programming play an important role not only as optimality tests but also as an intrinsic feature in the stability and convergence theory of related numerical methods. Besides appropriate first-order regularity conditions, the crucial point is a local growth estimate for the objective, which yields inverse stability information on the solution. In optimal control, similar results are known in the case of continuous control functions, and for bang–bang optimal controls when the state system is linear. The paper provides a generalization of the latter result to bang–bang optimal control problems for systems which are affine-linear with respect to the control but depend nonlinearly on the state. Local quadratic growth in terms of the L^1 norm of the control variation is obtained under appropriate structural and second-order sufficient optimality conditions.
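Read in the usual way (notation assumed here, not taken from the paper), such a growth estimate states that, for admissible controls u in an L^1 neighborhood of the optimal control ū and some constant c > 0,

```latex
J(u) \;\ge\; J(\bar u) + c \, \| u - \bar u \|_{L^1}^{2}.
```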

15.
We study an optimal control problem for a hybrid system exhibiting several internal switching variables whose discrete evolutions are governed by delayed thermostatic laws. Using the dynamic programming technique, we prove that the value function is the unique viscosity solution of a suitably coupled system of several Hamilton-Jacobi equations. The method involves a contraction principle and some suitably adapted results for exit-time problems with discontinuous exit cost.
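A minimal discrete-time sketch of a delayed thermostatic (relay) law, as an illustrative assumption rather than the paper's exact operator: the binary switching variable changes only when the scalar input crosses one of the outer thresholds, and otherwise remembers its previous value.

```python
def thermostat(u, lo, hi, w0=-1):
    """Delayed relay with thresholds lo < hi.

    The switching state w in {-1, +1} jumps to +1 once the input
    exceeds hi and to -1 once it falls below lo; in between it keeps
    its previous value, which is the hysteresis memory."""
    assert lo < hi
    w, out = w0, []
    for uk in u:
        if uk >= hi:
            w = 1
        elif uk <= lo:
            w = -1
        out.append(w)
    return out

# usage: the input rises through both thresholds, then falls back
print(thermostat([0.0, 0.6, 1.2, 0.6, 0.0, -1.2], lo=-1.0, hi=1.0))
# -> [-1, -1, 1, 1, 1, -1]
```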

16.
In this paper we derive a necessary optimality condition for a local optimal solution of certain control problems. These optimal control problems are governed by a semilinear Venttsel boundary value problem for a linear elliptic equation. The control acts on the state equation through the boundary, and a functional of the control together with the corresponding solution of the state equation is minimized. A constraint on the solution of the state equation is also considered.

17.
We consider a general continuous-time finite-dimensional deterministic system under a finite horizon cost functional. Our aim is to calculate approximate solutions to the optimal feedback control. First we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes to solve the equation numerically, one based on a time-difference approximation and the other on a time-space approximation. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming is a minimizing sequence of the optimal feedback control of the continuous counterpart. An example is presented for the time-space algorithm; the results illustrate that the scheme is effective. A minimal time-space discretization in the same spirit is sketched below.
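The sketch below is a hedged illustration of a backward time-space scheme for a one-dimensional finite-horizon problem (the dynamics x' = u with |u| ≤ 1, the costs, and the grid are assumptions for illustration, not the paper's scheme):

```python
import numpy as np

T, N, M = 1.0, 50, 201              # horizon, time steps, space nodes
dt = T / N
xs = np.linspace(-2.0, 2.0, M)      # space grid
controls = np.linspace(-1.0, 1.0, 11)

running = xs**2                     # running cost ell(x) = x^2 (illustrative)
V = np.abs(xs)                      # terminal cost g(x) = |x| (illustrative)

for _ in range(N):                  # march backward from t = T to t = 0
    V = np.array([
        # discrete dynamic programming / semi-Lagrangian update:
        #   V(t, x) = min_u [ dt * ell(x) + V(t + dt, x + dt * u) ]
        min(dt * running[j] + np.interp(x + dt * u, xs, V) for u in controls)
        for j, x in enumerate(xs)
    ])

print("approximate value at the origin: V(0, 0) ≈", V[M // 2])
```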

18.
We investigate a control problem for the heat equation. The goal is to find an optimal heat transfer coefficient in the dynamic boundary condition such that a desired temperature distribution at the boundary is attained. To this end we consider a function space setting in which the heat flux across the boundary is forced to be an L^p function with respect to the surface measure, which in turn implies higher regularity for the time derivative of the temperature. We show that the corresponding elliptic operator generates a strongly continuous semigroup of contractions and apply the concept of maximal parabolic regularity. This allows us to show the existence of an optimal control and to derive necessary and sufficient optimality conditions.

19.
This paper treats a finite time horizon optimal control problem in which the controlled state dynamics are governed by a general system of stochastic functional differential equations with a bounded memory. An infinite dimensional Hamilton–Jacobi–Bellman (HJB) equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation.

20.
In this paper, the value function for an optimal control problem with endpoint and state constraints is characterized as the unique lower semicontinuous generalized solution of the Hamilton-Jacobi equation. This is achieved under a constraint qualification (CQ) concerning the interaction of the state and dynamic constraints. The novelty of the results reported here lies partly in the nature of (CQ) and partly in the proof techniques employed, which are based on new estimates of the distance from a given trajectory violating the state constraint to the set of state trajectories satisfying it.
