Found 20 similar documents; search took 31 ms.
1.
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover, the cost functional is also of mean-field type, which makes the control problem time-inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space, a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
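The mean-field structure described in this abstract can be sketched schematically. The notation below is generic and illustrative (scalar-expectation mean-field dependence, subscripts denoting partial derivatives), not taken from the paper itself:

```latex
% Controlled mean-field SDE: coefficients depend on the law through E[X_t]
dX_t = b\big(t, X_t, \mathbb{E}[X_t], u_t\big)\,dt
     + \sigma\big(t, X_t, \mathbb{E}[X_t], u_t\big)\,dW_t,
\qquad X_0 = x_0.

% Cost functional of mean-field type
J(u) = \mathbb{E}\!\left[\int_0^T f\big(t, X_t, \mathbb{E}[X_t], u_t\big)\,dt
     + g\big(X_T, \mathbb{E}[X_T]\big)\right].

% First-order adjoint: a linear MEAN-FIELD backward SDE.  The extra
% E[.] terms are what distinguish it from the classical adjoint equation.
-dp_t = \Big( b_x^\top p_t + \mathbb{E}\big[b_m^\top p_t\big]
      + \sigma_x^\top q_t + \mathbb{E}\big[\sigma_m^\top q_t\big]
      + f_x + \mathbb{E}[f_m] \Big)\,dt - q_t\,dW_t,
\qquad p_T = g_x\big(X_T,\mathbb{E}[X_T]\big) + \mathbb{E}\big[g_m\big(X_T,\mathbb{E}[X_T]\big)\big].
```

Setting the mean-field derivatives ($b_m$, $\sigma_m$, $f_m$, $g_m$) to zero recovers the classical linear adjoint BSDE.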
2.
This paper is mainly concerned with the solutions to both forward
and backward mean-field stochastic partial differential equations and
the corresponding optimal control problem for mean-field stochastic
partial differential equation. The authors first prove the
continuous dependence theorems of forward and backward mean-field
stochastic partial differential equations and show the existence
and uniqueness of solutions to them. Then they establish necessary
and sufficient optimality conditions of the control problem in the
form of Pontryagin's maximum principle. To illustrate the
theoretical results, the authors apply stochastic maximum principles
to study the infinite-dimensional linear-quadratic control problem
of mean-field type. Further, an application to a Cauchy problem for
a controlled stochastic linear PDE of mean-field type is studied.
3.
This paper considers a stochastic control problem in which the dynamic system is a controlled backward stochastic heat equation with Neumann boundary control and boundary noise and the state must coincide with a given random vector at terminal time. Through defining a proper form of the mild solution for the state equation, the existence and uniqueness of the mild solution is given. As a main result, a global maximum principle for our control problem is presented. The main result is also applied to a backward linear-quadratic control problem in which an optimal control is obtained explicitly as a feedback of the solution to a forward–backward stochastic partial differential equation.
4.
We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend
on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field
type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For
a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in
the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order
adjoint equation remains the same as in Peng’s stochastic maximum principle.
5.
Roxana Dumitrescu Bernt Øksendal Agnès Sulem 《Journal of Optimization Theory and Applications》2018,176(3):559-584
We study optimal control for mean-field stochastic partial differential equations (stochastic evolution equations) driven by a Brownian motion and an independent Poisson random measure, in case of partial information control. One important novelty of our problem is represented by the introduction of general mean-field operators, acting on both the controlled state process and the control process. We first formulate a sufficient and a necessary maximum principle for this type of control. We then prove the existence and uniqueness of the solution of such general forward and backward mean-field stochastic partial differential equations. We apply our results to find the explicit optimal control for an optimal harvesting problem.
6.
吴霜 《Chinese Annals of Mathematics, Series A (Chinese Edition)》2021,42(1):75-88
The author studies an optimal control problem for a conditional mean-field stochastic differential equation. Such equations are related to certain stochastic optimal control problems under partial information and can be regarded as a generalization of mean-field stochastic differential equations. Necessary and sufficient conditions satisfied by the optimal control are given in the form of Pontryagin's maximum principle. In addition, a linear-quadratic optimal control problem is presented to illustrate the theoretical results.
7.
This paper studies the recursive optimal control problem for forward-backward stochastic delay systems with Poisson jumps. Using the classical spike-variation method, duality techniques, and results on anticipated backward stochastic differential equations with Poisson jumps, a maximum principle for the optimal control is proved, covering both the necessary and the sufficient conditions for optimality.
8.
John Joseph Absalom Hosking 《Applied Mathematics and Optimization》2012,66(3):415-454
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we take an approach based on spike variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011) that constructs an SMP for a mean-field type optimal stochastic control problem; however, our use of the second-order adjoint processes of the second type to handle what we call the second form of quadratic-type terms represents an alternative to adapting, to our setting, the approach used in their article for the analogous type of term.
9.
Qingfeng ZHU Lijiao SU Fuguo LIU Yufeng SHI Yong ao SHEN Shuyang WANG 《Frontiers of Mathematics in China》2020,15(6):1307
We study a kind of partial information non-zero sum differential games of mean-field backward doubly stochastic differential equations, in which the coefficient contains not only the state process but also its marginal distribution, and the cost functional is also of mean-field type. It is required that the control is adapted to a sub-filtration of the filtration generated by the underlying Brownian motions. We establish a necessary condition in the form of maximum principle and a verification theorem, which is a sufficient condition for Nash equilibrium point. We use the theoretical results to deal with a partial information linear-quadratic (LQ) game, and obtain the unique Nash equilibrium point for our LQ game problem by virtue of the unique solvability of mean-field forward-backward doubly stochastic differential equation.
10.
This paper establishes a necessary and sufficient stochastic maximum principle for a mean-field model with randomness described by Brownian motions and Poisson jumps. We also prove the existence and uniqueness of the solution to a jump-diffusion mean-field backward stochastic differential equation. A new version of the sufficient stochastic maximum principle, which only requires that the terminal cost be convex in an expected sense, is applied to solve a bicriteria mean–variance portfolio selection problem.
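For orientation, the mean–variance objective underlying such portfolio problems can be illustrated in a drastically simplified single-period, unconstrained setting. The sketch below is a toy Markowitz computation, not the paper's dynamic jump-diffusion formulation; the function name and parameters are chosen here for illustration:

```python
import numpy as np

def mean_variance_weights(mu, Sigma, r, gamma):
    """Unconstrained single-period mean-variance portfolio.

    Maximizes E[w'R] - (gamma/2) * Var(w'R) over portfolio weights w,
    given expected returns mu, covariance Sigma, risk-free rate r, and
    risk aversion gamma.  Closed form: w* = (1/gamma) * Sigma^{-1} (mu - r*1).
    Static toy illustration only -- the paper treats the dynamic,
    mean-field version of the trade-off.
    """
    mu = np.asarray(mu, dtype=float)
    excess = mu - r                      # excess expected returns
    return np.linalg.solve(Sigma, excess) / gamma

# Two uncorrelated assets: weights reduce to (mu_i - r) / (gamma * sigma_i^2)
w = mean_variance_weights(mu=[0.08, 0.12],
                          Sigma=np.diag([0.04, 0.09]),
                          r=0.02, gamma=2.0)
```

Raising `gamma` shrinks all positions proportionally, which is the basic mechanism the bicriteria (mean vs. variance) formulation trades off.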
11.
We obtain a maximum principle for the stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (a necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both the fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to a classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.
12.
We study the linear quadratic optimal stochastic control problem which is jointly driven by Brownian motion and Lévy processes. We prove that the new affine stochastic differential adjoint equation admits an inverse process by applying the section theorem. Applying Bellman's quasilinearization principle and a monotone iterative convergence method, we prove the existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that an optimal feedback control exists, and that the value function is composed of the initial value of the solution of the related backward Riccati differential equation and the related adjoint equation.
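The backward Riccati equation mentioned above can be illustrated in its simplest scalar, deterministic-coefficient form, integrated backward from the terminal condition. This is a minimal numerical sketch (function name and sign conventions are illustrative assumptions, not the paper's stochastic Riccati BSDE):

```python
def solve_riccati_backward(g, T, n_steps, a=0.0, b=1.0, r=1.0, q=0.0):
    """Integrate the scalar LQ Riccati ODE

        -K'(t) = 2*a*K - (b^2/r)*K^2 + q,   K(T) = g,

    backward in time by explicit Euler steps.  A deterministic toy
    stand-in for the monotone/quasilinearization scheme of the paper,
    which handles a stochastic Riccati equation with jump terms.
    """
    h = T / n_steps
    K = g
    for _ in range(n_steps):
        dK = -2 * a * K + (b * b / r) * K * K - q   # K'(t)
        K = K - h * dK                               # step from t down to t - h
    return K  # approximation of K(0)
```

With `a = q = 0` and `b = r = 1` the ODE reduces to `K' = K^2`, whose exact solution is `K(t) = g / (1 + g*(T - t))`, so the scheme can be checked against `K(0) = g / (1 + g*T)`.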
14.
Qingxin Meng 《Stochastic Analysis and Applications》2013,31(1):88-109
In this article, we consider a linear-quadratic optimal control problem (LQ problem) for a controlled linear stochastic differential equation driven by a multidimensional Brownian motion and a Poisson random martingale measure in the general case, where the coefficients are allowed to be predictable processes or random matrices. By the duality technique, the dual characterization of the optimal control is derived by the optimality system (so-called stochastic Hamilton system), which turns out to be a linear fully coupled forward-backward stochastic differential equation with jumps. Using a decoupling technique, the connection between the stochastic Hamilton system and the associated Riccati equation is established. As a result, the state feedback representation is obtained for the optimal control. As the coefficients for the LQ problem are random, here, the associated Riccati equation is a highly nonlinear backward stochastic differential equation (BSDE) with jumps, where the generator depends on the unknown variables K, L, and H in a quadratic way (see (5.9) herein). For the case where the generator is bounded and is linearly dependent on the unknown martingale terms L and H, the existence and uniqueness of the solution for the associated Riccati equation are established by Bellman's principle of quasi-linearization.
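The state-feedback representation referred to above can be written out in the no-jump, deterministic-coefficient special case for orientation. This is the standard stochastic LQ feedback form, shown schematically in generic notation; the paper's equation (5.9) is the general jump-diffusion BSDE version with martingale terms K, L, H:

```latex
% State equation:  dx = (Ax + Bu)\,dt + (Cx + Du)\,dW
% Riccati equation (backward, terminal weight G):
\dot K + A^\top K + K A + C^\top K C + Q
  - \big(K B + C^\top K D\big)\big(R + D^\top K D\big)^{-1}\big(B^\top K + D^\top K C\big) = 0,
\qquad K(T) = G.

% Optimal control in state-feedback form:
u^*(t) = -\big(R + D^\top K(t) D\big)^{-1}\big(B^\top K(t) + D^\top K(t) C\big)\,x(t).
```

When the coefficients are random, $K$ is no longer deterministic and the equation above becomes the nonlinear Riccati BSDE discussed in the abstract.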
15.
Olivier Menoukeu Pamen 《Journal of Optimization Theory and Applications》2017,175(2):373-410
This paper presents three versions of maximum principle for a stochastic optimal control problem of Markov regime-switching forward–backward stochastic differential equations with jumps. First, a general sufficient maximum principle for optimal control for a system, driven by a Markov regime-switching forward–backward jump–diffusion model, is developed. In the regime-switching case, it might happen that the associated Hamiltonian is not concave and hence the classical maximum principle cannot be applied. Hence, an equivalent type maximum principle is introduced and proved. In view of solving an optimal control problem when the Hamiltonian is not concave, we use a third approach based on Malliavin calculus to derive a general stochastic maximum principle. This approach also enables us to derive an explicit solution of a control problem when the concavity assumption is not satisfied. In addition, the framework we propose allows us to apply our results to solve a recursive utility maximization problem.
16.
胡世培 《Mathematics in Practice and Theory》2017,(12):249-255
This paper discusses the stochastic LQ problem for linear stochastic systems driven jointly by Brownian motion and Lévy processes, in which the cost functional is a conditional expectation with respect to the σ-algebra generated by the Lévy process. A new multidimensional backward stochastic Riccati equation driven by the Lévy process is obtained, and the existence of a solution to this stochastic Riccati equation is proved using Bellman's quasilinearization principle and a monotone convergence method.
17.
Olivier Menoukeu-Pamen Romuald Hervé Momeya 《Mathematical Methods of Operations Research》2017,85(3):349-388
In this paper, we present an optimal control problem for stochastic differential games under Markov regime-switching forward–backward stochastic differential equations with jumps. First, we prove a sufficient maximum principle for nonzero-sum stochastic differential games problems and obtain equilibrium point for such games. Second, we prove an equivalent maximum principle for nonzero-sum stochastic differential games. The zero-sum stochastic differential games equivalent maximum principle is then obtained as a corollary. We apply the obtained results to study a problem of robust utility maximization under a relative entropy penalty and to find optimal investment of an insurance firm under model uncertainty.
18.
Brahim Mezerdi Seid Bahlali 《Stochastics An International Journal of Probability and Stochastic Processes》2013,85(3-4):201-218
In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of the Itô type. We study the relaxed model for which an optimal solution exists. This is an extension of the initial control problem, where admissible controls are measure-valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.
19.
This work is concerned with numerical schemes for stochastic optimal control problems (SOCPs) by means of forward backward stochastic differential equations (FBSDEs). We first convert the stochastic optimal control problem into an equivalent stochastic optimality system of FBSDEs. Then we design an efficient second order FBSDE solver and a quasi-Newton type optimization solver for the resulting system. It is noticed that our approach achieves a second order rate of convergence even when the state equation is approximated by the Euler scheme. Several numerical examples are presented to illustrate the effectiveness and the accuracy of the proposed numerical schemes.
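The Euler discretization of the state equation mentioned in this abstract is only the forward half of such an FBSDE scheme; the backward solver and the quasi-Newton outer optimization are the paper's actual contribution. A minimal sketch of the forward Euler–Maruyama step, with generic drift and diffusion (names and signatures are illustrative assumptions):

```python
import numpy as np

def euler_maruyama(a, b, x0, T, n_steps, rng=None):
    """Euler--Maruyama discretization of the scalar SDE

        dX_t = a(t, X_t) dt + b(t, X_t) dW_t,   X_0 = x0,

    returning the full sample path on a uniform grid of n_steps steps.
    """
    rng = rng or np.random.default_rng(0)
    h = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        t = i * h
        dW = rng.normal(0.0, np.sqrt(h))     # Brownian increment ~ N(0, h)
        x[i + 1] = x[i] + a(t, x[i]) * h + b(t, x[i]) * dW
    return x

# Sanity check: with zero diffusion the scheme reduces to forward Euler for
# the ODE x' = x, whose exact solution at time T = 1 is e.
path = euler_maruyama(lambda t, x: x, lambda t, x: 0.0,
                      x0=1.0, T=1.0, n_steps=20000)
```

Since this scheme is only first-order accurate in the strong sense, the point of the paper's construction is that the overall optimality system can still be solved with second-order accuracy.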
20.
K. Bahlali O. Kebiri B. Mezerdi 《Stochastics An International Journal of Probability and Stochastic Processes》2018,90(6):861-875
We consider a controlled system driven by a coupled forward–backward stochastic differential equation with a non-degenerate diffusion matrix. The cost functional is defined by the solution of the controlled backward stochastic differential equation at the initial time. Our goal is to find an optimal control which minimizes the cost functional. The method consists in constructing a sequence of approximating controlled systems, for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we establish the existence of a relaxed optimal control to the initial problem. The existence of a strict control follows from the Filippov convexity condition.