Similar Documents
 20 similar documents retrieved (search time: 218 ms).
1.
We study a class of stochastic optimal control problems for reflected diffusion processes with Poisson jumps on a half space. We obtain the nonlinear Nisio semigroup associated with this control problem and the Hamilton-Jacobi-Bellman equation with Neumann boundary conditions associated with this semigroup. We discuss the existence and uniqueness of viscosity solutions of this class of equations and prove that the value function of the control problem is a viscosity solution of the equation.
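To make the boundary condition concrete (a schematic form with notation assumed here, not quoted from the paper): for a reflected jump-diffusion on a half space D with inward normal n, the value function u is expected to satisfy the HJB equation in the interior of D together with a homogeneous Neumann condition on the boundary,

```latex
% Schematic Neumann boundary condition accompanying the interior HJB equation
% (notation assumed for illustration).
\[
  \frac{\partial u}{\partial n}(x) \;=\; 0 , \qquad x \in \partial D ,
\]
```

both interpreted in the viscosity sense.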

2.
We consider general optimal stochastic control problems and the associated Hamilton–Jacobi–Bellman equations. We develop a general notion of weak solutions – called viscosity solutions – of the Hamilton–Jacobi–Bellman equations that is stable, and we show that the optimal cost functions of the control problems are always solutions, in that sense, of the Hamilton–Jacobi–Bellman equations. We then prove general uniqueness results for viscosity solutions of the Hamilton–Jacobi–Bellman equations.
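For orientation (an illustrative, generic formulation; the notation below is assumed rather than quoted from the abstract): for a controlled diffusion with drift b, diffusion coefficient σ, running cost f, and discount rate λ > 0, the optimal cost function and the associated Hamilton–Jacobi–Bellman equation read

```latex
% Generic discounted infinite-horizon control problem and its HJB equation
% (illustrative notation).
\[
  u(x) \;=\; \inf_{\alpha(\cdot)} \,
  \mathbb{E}\!\left[ \int_0^{\infty} e^{-\lambda t}\,
  f\bigl(X_t,\alpha_t\bigr)\,dt \;\Big|\; X_0 = x \right],
  \qquad
  dX_t = b(X_t,\alpha_t)\,dt + \sigma(X_t,\alpha_t)\,dW_t ,
\]
\[
  \lambda\,u(x) \;-\; \inf_{a \in A}
  \Bigl\{ b(x,a)\!\cdot\! Du(x)
  + \tfrac12\,\mathrm{tr}\bigl(\sigma\sigma^{\top}(x,a)\,D^2 u(x)\bigr)
  + f(x,a) \Bigr\} \;=\; 0 .
\]
```

Viscosity solutions give this equation a stable meaning precisely when u fails to be twice differentiable.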

3.
We study the existence of optimal solutions for a class of infinite horizon nonconvex autonomous discrete-time optimal control problems. This class contains optimal control problems without discounting arising in economic dynamics which describe a model with a nonconcave utility function.

4.
This paper deals with the optimal control of space-time statistical behavior of turbulent fields. We provide a unified treatment of optimal control problems for the deterministic and stochastic Navier-Stokes equation with linear and nonlinear constitutive relations. Tonelli-type ordinary controls as well as Young-type chattering controls are analyzed. For the deterministic case with monotone viscosity we use the Minty-Browder technique to prove the existence of optimal controls. For the stochastic case with monotone viscosity, we combine the Minty-Browder technique with the martingale problem formulation of Stroock and Varadhan to establish existence of optimal controls. The deterministic models given in this paper also cover some simple eddy-viscosity-type turbulence closure models.

5.
We develop a viscosity solution theory for a system of nonlinear degenerate parabolic integro-partial differential equations (IPDEs) related to stochastic optimal switching and control problems or stochastic games. In the case of stochastic optimal switching and control, we prove via dynamic programming methods that the value function is a viscosity solution of the IPDEs. In our setting the value functions or the solutions of the IPDEs are not smooth, so classical verification theorems do not apply.
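For orientation (a generic form with notation assumed here rather than taken from the paper): in optimal switching between modes i = 1, ..., m with switching costs c_ij > 0, the value functions u_i are typically characterized by a system of variational inequalities of the type

```latex
% Illustrative system of (integro-)PDE variational inequalities for optimal switching.
\[
  \max\Bigl\{ \lambda\,u_i(x) - \mathcal{L}^{i} u_i(x) - f_i(x),\;
  u_i(x) - \min_{j \ne i}\bigl( u_j(x) + c_{ij} \bigr) \Bigr\} \;=\; 0 ,
  \qquad i = 1,\dots,m ,
\]
```

where L^i is the (possibly nonlocal) generator in mode i; since the u_i need not be smooth, viscosity solutions are the natural framework.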

6.
We consider a class of age-structured control problems with nonlocal dynamics and boundary conditions. For these problems we suggest Arrow-type sufficient conditions for optimality of problems defined on finite as well as infinite time intervals. We examine some models as illustrations (optimal education and optimal offence control problems).

7.
We study a class of infinite horizon and exit-time control problems for nonlinear systems with unbounded data using the dynamic programming approach. We prove local optimality principles for viscosity super- and subsolutions of degenerate Hamilton–Jacobi equations in a very general setting. We apply these results to characterize the (possibly multiple) discontinuous solutions of Dirichlet and free boundary value problems as suitable value functions for the above-mentioned control problems.

8.
We study a class of infinite horizon and exit-time control problems for nonlinear systems with unbounded data using the dynamic programming approach. We prove local optimality principles for viscosity super- and subsolutions of degenerate Hamilton–Jacobi equations in a very general setting. We apply these results to characterize the (possibly multiple) discontinuous solutions of Dirichlet and free boundary value problems as suitable value functions for the above-mentioned control problems.

9.
We consider a family of parametric linear-quadratic optimal control problems with terminal and control constraints. This family has the specific feature that the class of optimal controls is changed for an arbitrarily small change in the parameter. In the perturbed problem, the behavior of the corresponding trajectory on noncritical arcs of the optimal control is described by solutions of singularly perturbed boundary value problems. For the solutions of these boundary value problems, we obtain an asymptotic expansion in powers of the small parameter ε. The asymptotic formula starts from a term of the order of 1/ε and contains boundary layers. This formula is used to justify the asymptotic expansion of the optimal control for a perturbed problem in the family. We suggest a simple method for constructing approximate solutions of the perturbed optimal control problems without integrating singularly perturbed systems. The results of a numerical experiment are presented.
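Schematically (the precise ansatz of the paper is not reproduced here; the symbols below are illustrative assumptions), such an expansion on [0, T] typically combines a singular term of order 1/ε, regular terms, and boundary-layer corrections near the endpoints:

```latex
% Schematic boundary-layer expansion; notation assumed for illustration.
\[
  z(t,\varepsilon) \;\approx\; \frac{z_{-1}(t)}{\varepsilon}
  \;+\; \sum_{k \ge 0} \varepsilon^{k}
  \Bigl( \bar z_k(t)
  + \Pi_k\!\bigl(\tfrac{t}{\varepsilon}\bigr)
  + Q_k\!\bigl(\tfrac{T-t}{\varepsilon}\bigr) \Bigr),
\]
```

with the boundary-layer functions Π_k and Q_k decaying exponentially away from t = 0 and t = T, respectively.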

10.
We define a new class of optimal control problems and show that it is the largest class of nonregular optimal control problems in which every admissible process that satisfies the Extended Pontryagin Maximum Principle is an optimal solution. In this class of problems the local and global minimum coincide. A dual problem is also proposed, which may be seen as a generalization of the Mond–Weir-type dual problem, and it is shown that the 2-invexity notion is a necessary and sufficient condition to establish weak, strong, and converse duality results between a nonregular optimal control problem and its dual problem. We also present an example to illustrate our results.

11.
12.
We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper is that this optimal control problem has the same value as a linear optimization problem stated on an appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub)solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss the relation to long-time average problems.
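As an illustration of the kind of linear reformulation meant here (generic notation, assumed rather than taken from the paper): for a discounted problem with controlled generator L^a, discount rate λ > 0, running cost f, and initial state x_0, one minimizes over discounted occupation measures μ on the state–action space:

```latex
% Generic occupation-measure (linear-programming) formulation of a discounted
% control problem; notation assumed for illustration.
\[
  \inf_{\mu \ge 0}\; \int_{X \times A} f(x,a)\,\mu(dx,da)
  \quad\text{subject to}\quad
  \int_{X \times A} \bigl( L^{a}\varphi(x) - \lambda\,\varphi(x) \bigr)\,\mu(dx,da)
  \;+\; \varphi(x_0) \;=\; 0
  \;\;\text{for all test functions } \varphi .
\]
```

Under suitable assumptions the optimal value of this linear program coincides with the value of the original control problem, and its dual is naturally phrased in terms of subsolutions of the Hamilton-Jacobi-Bellman equation.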

13.
In the paper, we consider nonlinear optimal control problems with the Bolza functional and with fixed terminal time. We suggest a construction of optimal grid synthesis. For each initial state of the control system, we obtain an estimate for the difference between the optimal result and the value of the functional on the trajectory generated by the suggested grid positional control. The considered feedback control constructions and the estimates of their efficiency are based on a backward dynamic programming procedure. We also use necessary and sufficient optimality conditions in terms of characteristics of the Bellman equation and the sub-differential of the minimax viscosity solution of this equation in the Cauchy problem specified for the fixed terminal time. The results are illustrated by the numerical solution of a nonlinear optimal control problem.
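A minimal sketch of the backward dynamic-programming idea behind such a grid synthesis (a toy one-dimensional discretization with assumed data; this is not the paper's construction and carries no error estimate):

```python
# Toy backward dynamic programming on a grid for a Bolza-type problem:
#   minimize  phi(x_N) + sum_k h * L(x_k, u_k)
#   subject to x_{k+1} = x_k + h * f(x_k, u_k),
# producing an approximate value function and a grid feedback control.
import numpy as np

f = lambda x, u: u                        # dynamics x' = u (illustrative)
L = lambda x, u: 0.5 * (x**2 + u**2)      # running cost
phi = lambda x: x**2                      # terminal cost

T, N = 1.0, 50                            # horizon and number of time steps
h = T / N
xs = np.linspace(-2.0, 2.0, 201)          # state grid
us = np.linspace(-1.0, 1.0, 41)           # control grid

V = phi(xs)                               # value function at the terminal time
policy = np.zeros((N, xs.size))           # grid feedback control

for k in range(N - 1, -1, -1):            # backward in time
    V_next = V.copy()
    for i, x in enumerate(xs):
        x_next = x + h * f(x, us)                     # candidate successor states
        costs = h * L(x, us) + np.interp(x_next, xs, V_next)
        j = int(np.argmin(costs))
        V[i] = costs[j]
        policy[k, i] = us[j]

# Forward simulation of the trajectory generated by the grid feedback control.
x, J = 1.0, 0.0
for k in range(N):
    u = np.interp(x, xs, policy[k])
    J += h * L(x, u)
    x += h * f(x, u)
print("DP value at x0=1:", float(np.interp(1.0, xs, V)), "simulated cost:", J + phi(x))
```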

14.
In this paper, we consider a class of optimal control problems in which the dynamical system involves a finite number of switching times together with a state jump at each of these switching times. The locations of these switching times and a parameter vector representing the state jumps are taken as decision variables. We show that this class of optimal control problems is equivalent to a special class of optimal parameter selection problems. Gradient formulas for the cost functional and the constraint functional are derived. On this basis, a computational algorithm is proposed. For illustration, a numerical example is included.
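A hedged sketch of the equivalent optimal parameter selection problem (illustrative only: the decision variables are two switching times and two scalar jump parameters, the dynamics and costs are made up, and a generic finite-difference-based optimizer stands in for the gradient formulas derived in the paper):

```python
# Switching times t1 < t2 and state-jump parameters d1, d2 are ordinary decision
# variables; the cost is evaluated by integrating the dynamics piecewise and
# applying the state jump at each switching time.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T = 2.0                                     # fixed terminal time

def dynamics(mode):
    # one (illustrative) scalar subsystem per mode
    return lambda t, x: [-x[0] + float(mode)]

def cost(z):
    t1, t2, d1, d2 = z
    times, jumps = [0.0, t1, t2, T], [d1, d2]
    x, J = np.array([1.0]), 0.0
    for mode in range(3):
        sol = solve_ivp(dynamics(mode), (times[mode], times[mode + 1]), x,
                        dense_output=True, rtol=1e-8)
        ts = np.linspace(times[mode], times[mode + 1], 50)
        v = sol.sol(ts)[0] ** 2             # running cost integrand x^2
        J += np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(ts))   # trapezoidal rule
        x = sol.y[:, -1]
        if mode < 2:
            x = x + jumps[mode]             # state jump at the switching time
    return J + float(x[0]) ** 2             # running cost plus terminal cost

res = minimize(cost, x0=[0.5, 1.2, 0.0, 0.0],
               bounds=[(0.01, T - 0.02), (0.02, T - 0.01), (-1.0, 1.0), (-1.0, 1.0)],
               constraints=[{"type": "ineq", "fun": lambda z: z[1] - z[0] - 0.01}])
print("switching times and jumps:", res.x, "cost:", res.fun)
```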

15.
We consider control problems with a general cost functional where the state equations are the stationary, incompressible Navier-Stokes equations with shear-dependent viscosity. The equations are quasi-linear. The control function is given as the inhomogeneity of the momentum equation. In this paper, we study a general class of viscosity functions which correspond to shear-thinning or shear-thickening behavior. The basic results concerning existence, uniqueness, boundedness, and regularity of the solutions of the state equations are reviewed. The main topic of the paper is the proof of Gâteaux differentiability, which extends known results. It is shown that the derivative is the unique solution to a linearized equation. Moreover, necessary first-order optimality conditions are stated, and the existence of a solution of a class of control problems is shown.
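For concreteness (a standard example of this kind of constitutive law, assumed here rather than quoted from the paper): with Du denoting the symmetric velocity gradient, a power-law extra stress

```latex
% A common shear-dependent viscosity model (illustrative):
\[
  S(Du) \;=\; \nu\bigl(|Du|^{2}\bigr)\,Du ,
  \qquad
  \nu(s) \;=\; \nu_0\,(1+s)^{\frac{p-2}{2}} ,
\]
```

is shear-thinning for 1 < p < 2 and shear-thickening for p > 2; in the setting of the abstract, the control enters the quasi-linear momentum equation as its inhomogeneity (right-hand side).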

16.
In a previous paper the author has introduced a new notion of a (generalized) viscosity solution for Hamilton-Jacobi equations with an unbounded nonlinear term. It is proved here that the minimal time function (resp. the optimal value function) for time optimal control problems (resp. optimal control problems) governed by evolution equations is a (generalized) viscosity solution for the Bellman equation (resp. the dynamic programming equation). It is also proved that the Neumann problem in convex domains may be viewed as a Hamilton-Jacobi equation with a suitable unbounded nonlinear term.

17.
This paper is concerned with the problems of optimal switching for general stochastic processes. We show the existence of the maximal element of a class of dynamic programming inequalities by the method of impulsive control. We obtain results on the existence of optimal control for general and cyclic switching problems.

18.
Optimality Conditions for Systems Governed by Semilinear Elliptic Equations
高夯. 《数学学报》 (Acta Mathematica Sinica), 2001, 44(2): 319-332
This paper discusses optimal control problems for systems governed by elliptic partial differential equations that may admit multiple solutions. By constructing an approximating sequence of parabolic control problems and using known results for parabolic control problems, we obtain necessary conditions for optimal controls of the elliptic system.

19.
We show that the value function of a singular stochastic control problem is equal to the integral of the value function of an associated optimal stopping problem. The connection is proved for a general class of diffusions using the method of viscosity solutions.
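Schematically (one-dimensional, with notation assumed for illustration), the connection asserted here is of the form

```latex
% Illustrative form of the singular control / optimal stopping connection.
\[
  V(x) \;=\; V(0) \;+\; \int_{0}^{x} u(y)\,dy ,
\]
```

where V is the value function of the singular stochastic control problem and u is the value function of an associated optimal stopping problem; heuristically, the stopping problem characterizes the gradient V' of the singular-control value function.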

20.
We show that the value function of a singular stochastic control problem is equal to the integral of the value function of an associated optimal stopping problem. The connection is proved for a general class of diffusions using the method of viscosity solutions.
