Similar literature (20 results)
1.
We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in that the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.
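As a hedged illustration of the setting (the notation here is assumed, not taken from the paper), the controlled mean-field SDE and cost functional typically read
\[
dX_t = b\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dt + \sigma\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dW_t, \qquad
J(u) = \mathbb{E}\Bigl[\int_0^T f\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dt + g\bigl(X_T, \mathbb{E}[X_T]\bigr)\Bigr],
\]
and the first-order adjoint equation is a linear backward SDE whose driver contains expectations of the derivatives taken with respect to the mean argument, schematically
\[
-dp_t = \Bigl(b_x^{\top}p_t + \mathbb{E}\bigl[b_y^{\top}p_t\bigr] + \sigma_x^{\top}q_t + \mathbb{E}\bigl[\sigma_y^{\top}q_t\bigr] + f_x + \mathbb{E}\bigl[f_y\bigr]\Bigr)\,dt - q_t\,dW_t, \qquad
p_T = g_x + \mathbb{E}\bigl[g_y\bigr],
\]
which is the mean-field BSDE structure referred to above; the exact form of the mean-field terms is the one derived in the paper.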

2.
This paper is concerned with optimal control of neutral stochastic functional differential equations (NSFDEs). The Pontryagin maximum principle is proved for the optimal control, where the adjoint equation is a linear neutral backward stochastic functional equation of Volterra type (VNBSFE). Existence and uniqueness of solutions are proved for general nonlinear VNBSFEs. Under the convexity assumption on the Hamiltonian function, a sufficient condition for optimality is given as well.

3.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward-backward stochastic differential equation (MFFBSDE, for short) and the admissible control is of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland's variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle characterizing the necessary condition for an optimal control. We also study a stochastic linear-quadratic control problem with state constraints.

4.
This paper is mainly concerned with the solutions to both forward and backward mean-field stochastic partial differential equations and the corresponding optimal control problem for mean-field stochastic partial differential equations. The authors first prove continuous dependence theorems for forward and backward mean-field stochastic partial differential equations and show the existence and uniqueness of solutions to them. Then they establish necessary and sufficient optimality conditions for the control problem in the form of Pontryagin's maximum principle. To illustrate the theoretical results, the authors apply the stochastic maximum principles to study an infinite-dimensional linear-quadratic control problem of mean-field type. Further, an application to a Cauchy problem for a controlled stochastic linear PDE of mean-field type is studied.

5.
An optimal control problem for a controlled backward stochastic partial differential equation in abstract evolution form with a Bolza-type performance functional is considered. The control domain is not assumed to be convex, and all coefficients of the system are allowed to be random. A variational formula for the functional in the direction of a given control process is derived in terms of the Hamiltonian and the associated adjoint system. As an application, a global stochastic maximum principle of Pontryagin type for optimal controls is established.

6.
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of mean-field type. The information structure considered for the SDG is possibly asymmetric and partial. To prove our SMP we take an approach based on spike variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of a class of backward stochastic equations introduced in this paper, which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011), which constructs an SMP for a mean-field type optimal stochastic control problem; however, using the second-order adjoint processes of a second type to handle what we call the second form of quadratic-type terms is an alternative to adapting, to our setting, the approach used in their article for the analogous terms.
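For concreteness (notation assumed here, not taken from the article), a control profile $u^{*}=(u^{*,1},\dots,u^{*,N})$ is a Nash equilibrium of the SDG when no agent can lower its own mean-field cost by a unilateral deviation,
\[
J^{i}\bigl(u^{*,1},\dots,u^{*,i},\dots,u^{*,N}\bigr) \;\le\; J^{i}\bigl(u^{*,1},\dots,u^{i},\dots,u^{*,N}\bigr) \qquad \text{for all admissible } u^{i},\ i=1,\dots,N,
\]
where, under the asymmetric and partial information structure, each deviation $u^{i}$ must be adapted to agent $i$'s own information filtration; the SMP gives necessary conditions for such a profile.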

7.

In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of Itô type. We study the relaxed model, for which an optimal solution exists. This is an extension of the initial control problem, in which admissible controls are measure-valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.
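As a hedged sketch of what "measure-valued" means here (notation assumed), a relaxed control is a process $q_t(da)$ of probability measures on the action space $A$, and the state equation and cost are integrated against it,
\[
dx_t = \int_A b(t, x_t, a)\,q_t(da)\,dt + \int_A \sigma(t, x_t, a)\,q_t(da)\,dW_t, \qquad
J(q) = \mathbb{E}\Bigl[\int_0^T\!\!\int_A h(t, x_t, a)\,q_t(da)\,dt + g(x_T)\Bigr],
\]
a strict control being recovered as the Dirac case $q_t(da)=\delta_{u_t}(da)$.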

8.
This article deals with a stochastic control problem for certain fluids of non-Newtonian type. More precisely, the state equation is given by the two-dimensional stochastic second grade fluids perturbed by a multiplicative white noise. The control acts through an external stochastic force, and we search for a control that minimizes a cost functional. We show that the Gâteaux derivative of the control-to-state map is a stochastic process that is the unique solution of the stochastic linearized state equation. The well-posedness of the corresponding backward stochastic adjoint equation is also established, allowing us to derive the first-order optimality condition.

9.
Optimal control problems for the unsteady Burgers equation, both without constraints and with control constraints, are solved using the high-level modelling and simulation package COMSOL Multiphysics. Based on the first-order optimality conditions, projection and semi-smooth Newton methods are applied to solve the optimality system. The optimality system is solved numerically in two ways: with the classical iterative approach, integrating the state equation forward in time and the adjoint equation backward in time within a gradient method; and by treating the optimality system in the space-time cylinder as an elliptic equation and solving it adaptively. The equivalence of the optimality system to an elliptic partial differential equation (PDE) is shown by transforming the Burgers equation, via the Cole-Hopf transformation, into a linear diffusion-type equation. Numerical results obtained with the adaptive and nonadaptive elliptic solvers of COMSOL Multiphysics are presented for both the unconstrained and the control-constrained case.
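The Cole-Hopf transformation mentioned above is the standard identity (stated here independently of the particular control setup): for the viscous Burgers equation, the substitution
\[
u_t + u\,u_x = \nu\,u_{xx}, \qquad u = -2\nu\,\frac{\varphi_x}{\varphi} \quad\Longrightarrow\quad \varphi_t = \nu\,\varphi_{xx},
\]
turns the nonlinear state equation into a linear heat (diffusion-type) equation, which is what allows the optimality system to be treated as an elliptic problem in the space-time cylinder.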

10.
We study a class of partial-information non-zero-sum differential games of mean-field backward doubly stochastic differential equations, in which the coefficients depend not only on the state process but also on its marginal distribution, and the cost functional is also of mean-field type. The control is required to be adapted to a sub-filtration of the filtration generated by the underlying Brownian motions. We establish a necessary condition in the form of a maximum principle and a verification theorem, which is a sufficient condition for a Nash equilibrium point. We use the theoretical results to treat a partial-information linear-quadratic (LQ) game, and obtain the unique Nash equilibrium point for our LQ game problem by virtue of the unique solvability of a mean-field forward-backward doubly stochastic differential equation.

11.
§1. Introduction. Let $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t\ge 0})$ be a complete filtered probability space on which a standard one-dimensional Brownian motion $w(\cdot)$ is defined, such that $\{\mathcal{F}_t\}_{t\ge 0}$ is the natural filtration generated by $w(\cdot)$, augmented by all the $P$-null sets in $\mathcal{F}$. We consider the following state equation, where $\tau \in \mathcal{T}[0,T]$, the set of all $\{\mathcal{F}_t\}_{t\ge 0}$-stopping times taking values in $[0,T]$, $\xi \in L^2_{\mathcal{F}_\tau}(\Omega;\mathbb{R}^n)$; $A$, $B$, $C$, $D$ are matrix-valued $\{\mathcal{F}_t\}_{t\ge 0}$-adapted bounded processes. In the above, $u(\cdot) \in \mathcal{U}[\tau,T] \triangleq L^2_{\mathcal{F}}(\tau, T$…

12.
The authors study an optimal control problem for conditional mean-field stochastic differential equations. Such equations are related to certain stochastic optimal control problems under partial information and can be regarded as a generalization of mean-field stochastic differential equations. Necessary and sufficient conditions satisfied by the optimal control are given in the form of Pontryagin's maximum principle. In addition, a linear-quadratic optimal control problem is presented to illustrate the application of the theoretical results.
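As a hedged sketch (notation assumed, not taken from the paper), a conditional mean-field SDE replaces the expectation $\mathbb{E}[X_t]$ of an ordinary mean-field SDE by a conditional expectation with respect to a sub-filtration $\mathcal{G}_t$, e.g.
\[
dX_t = b\bigl(t, X_t, \mathbb{E}[X_t \mid \mathcal{G}_t], u_t\bigr)\,dt + \sigma\bigl(t, X_t, \mathbb{E}[X_t \mid \mathcal{G}_t], u_t\bigr)\,dW_t, \qquad X_0 = x_0,
\]
which is how the connection to stochastic optimal control under partial information arises.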

13.
We consider a stochastic control problem where the system is governed by a nonlinear stochastic differential equation with jumps. The control is allowed to enter both the diffusion and jump terms. By using only the first-order expansion and the associated adjoint equation, we establish necessary as well as sufficient optimality conditions for relaxed controls, which are measure-valued processes.

14.
We present necessary conditions of optimality for an infinite-horizon optimal control problem. The transversality condition is derived with the help of stability theory and is formulated in terms of the Lyapunov exponents of solutions to the adjoint equation. A problem without an exponential factor in the integral functional is considered. Necessary and sufficient conditions of optimality are proved for linear-quadratic problems with conelike control constraints.

15.
We present a numerical method for solving tracking-type optimal control problems subject to scalar nonlinear hyperbolic balance laws in one and two space dimensions. Our approach is based on the formal optimality system and requires numerical solutions of the hyperbolic balance law forward in time and its nonconservative adjoint equation backward in time. To this end, we develop a hybrid method, which utilizes advantages of both the Eulerian finite-volume central-upwind scheme (for solving the balance law) and the Lagrangian discrete characteristics method (for solving the adjoint transport equation). Experimental convergence rates as well as numerical results for optimization problems with both linear and nonlinear constraints and a duct design problem are presented.
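A hedged sketch of the formal optimality system being discretized (notation assumed; the paper's precise choice of control and constraints may differ): for a tracking functional and a scalar balance law with controlled initial datum,
\[
\min_{u_0}\; \frac{1}{2}\int \bigl(u(x,T)-u_d(x)\bigr)^2\,dx \quad \text{subject to} \quad u_t + f(u)_x = g(x,u),\ \ u(x,0)=u_0(x),
\]
the nonconservative adjoint transport equation
\[
p_t + f'(u)\,p_x = -\,g_u(x,u)\,p, \qquad p(x,T) = u(x,T)-u_d(x),
\]
is solved backward in time by the Lagrangian discrete-characteristics method, while the forward balance law is handled by the Eulerian finite-volume central-upwind scheme.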

16.
A maximum principle is developed for a class of problems involving the optimal control of a distributed-parameter system governed by a linear hyperbolic equation in one space dimension that is not necessarily separable. A convex index of performance is formulated, which consists of functionals of the state variable, its first- and second-order space derivatives, its first-order time derivative, and a penalty functional involving the open-loop control force. The solution of the optimal control problem is shown to be unique. The adjoint operator is determined, and a maximum principle relating the control function to the adjoint variable is stated. The proof of the maximum principle is given with the help of convexity arguments. The maximum principle can be used to compute the optimal control function and is particularly suitable for problems involving the active control of structural elements for vibration suppression.

17.
This paper studies optimality conditions for stochastic control problems of backward doubly stochastic systems with jumps. Under the assumptions that the control domain is convex and that the control variable enters all coefficients, necessary and sufficient optimality conditions are given in local and global forms, respectively. The maximum principle is then applied to a doubly stochastic linear-quadratic optimal control problem, yielding the unique optimal control, and an example of the application is given.

18.
This paper establishes a necessary and sufficient stochastic maximum principle for a mean-field model with randomness described by Brownian motions and Poisson jumps. We also prove the existence and uniqueness of the solution to a jump-diffusion mean-field backward stochastic differential equation. A new version of the sufficient stochastic maximum principle, which only requires that the terminal cost be convex in an expected sense, is applied to solve a bicriteria mean-variance portfolio selection problem.
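The mean-variance objective is of mean-field type because the variance involves the square of an expectation; as a hedged sketch (a weighted single-objective form of the bicriteria problem, notation assumed),
\[
\min_u\; \gamma\,\mathrm{Var}(X_T) - \mathbb{E}[X_T] = \gamma\Bigl(\mathbb{E}[X_T^2] - \bigl(\mathbb{E}[X_T]\bigr)^2\Bigr) - \mathbb{E}[X_T],
\]
the terminal cost $g(x,\bar{x}) = \gamma\,(x^2 - \bar{x}^2) - x$ is not jointly convex in $(x,\bar{x})$, yet the functional $X_T \mapsto \gamma\,\mathrm{Var}(X_T) - \mathbb{E}[X_T]$ is convex, which is exactly the weakened "convex in an expected sense" requirement used by the sufficient maximum principle.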

19.
It is well known in optimal control theory that the maximum principle, in general, furnishes only necessary optimality conditions for an admissible process to be optimal. It is also well known that if a process satisfies the maximum principle in a problem with convex data, the maximum principle turns out to be a sufficient condition as well. Here an invexity-type condition for state-constrained optimal control problems is defined and shown to be a sufficient optimality condition. Further, it is demonstrated that all optimal control problems in which every extremal process is optimal necessarily obey this invexity condition. Thus optimal control problems satisfying such a condition constitute the most general class of problems in which the maximum principle automatically becomes a set of sufficient optimality conditions.

20.
A linear-quadratic optimal control problem is discussed in which the stochastic system is a linear stochastic differential equation driven by a Lévy process, with random coefficients and an affine term. The adjoint equation has unbounded coefficients, so its solvability is not obvious. Using the theory of $\mathscr{B}\mathscr{M}\mathscr{O}$ martingales, the existence and uniqueness of the solution to the adjoint equation on a finite horizon is proved. Under a stability condition, the existence of solutions to the backward stochastic Riccati differential equation and the associated backward stochastic equation on the infinite horizon is obtained by approximating with the solutions of the corresponding finite-horizon equations. The optimal control can be synthesized from these solutions.
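As a hedged sketch of the synthesis mentioned in the last sentence (notation assumed; the precise Lévy-driven Riccati equation is the one given in the paper): once the backward stochastic Riccati equation, with solution $P$ and its martingale parts, and the affine adjoint backward equation, with solution $\varphi$ and its martingale parts, are solvable, the optimal control takes a linear state-feedback-plus-affine form, schematically
\[
u_t^{*} = K_t\,X_t^{*} + k_t,
\]
where the gain $K_t$ is built from $P_t$ (and its martingale parts, accounting for the random coefficients and the jumps) and the offset $k_t$ comes from $\varphi_t$ and the affine term of the state equation.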
