Similar Articles

20 similar articles found.
1.
We study a class of partial information non-zero sum differential games of mean-field backward doubly stochastic differential equations, in which the coefficient contains not only the state process but also its marginal distribution, and the cost functional is also of mean-field type. It is required that the control be adapted to a sub-filtration of the filtration generated by the underlying Brownian motions. We establish a necessary condition in the form of a maximum principle and a verification theorem, which is a sufficient condition for a Nash equilibrium point. We use the theoretical results to deal with a partial information linear-quadratic (LQ) game, and obtain the unique Nash equilibrium point for our LQ game problem by virtue of the unique solvability of the associated mean-field forward-backward doubly stochastic differential equation.
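For orientation only (this schematic is ours, not quoted from the paper), a mean-field backward doubly stochastic differential equation of the kind studied here can be sketched, under standard Lipschitz assumptions, as

\[
Y_t = \xi + \int_t^T f\bigl(s, Y_s, Z_s, \mathbb{E}[Y_s], \mathbb{E}[Z_s]\bigr)\,ds
      + \int_t^T g\bigl(s, Y_s, Z_s, \mathbb{E}[Y_s], \mathbb{E}[Z_s]\bigr)\,\overleftarrow{dB}_s
      - \int_t^T Z_s\,dW_s,
\]

where W and B are independent Brownian motions, the integral with respect to B is a backward Itô integral, and the expectation terms carry the mean-field interaction; in the game problem each player's control enters the coefficients and is required to be adapted only to a sub-filtration of the full filtration.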

2.
ABSTRACT

The purpose of this paper is to study stochastic control problems for systems driven by mean-field stochastic differential equations with elephant memory, in the sense that the system (like an elephant) never forgets its history. We study both the finite horizon case and the infinite horizon case.
  • In the finite horizon case, results about existence and uniqueness of solutions of such a system are given. Moreover, we prove sufficient as well as necessary stochastic maximum principles for the optimal control of such systems. We apply our results to solve a mean-field linear quadratic control problem.

  • For infinite horizon, we derive sufficient and necessary maximum principles.

    As an illustration, we solve an optimal consumption problem from a cash flow modelled by an elephant memory mean-field system.
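A schematic form of such a state equation (our illustration, with hypothetical coefficients b and sigma, not taken verbatim from the paper) is

\[
dX(t) = b\bigl(t, X(t), X_{[0,t]}, \mathbb{E}[X(t)], u(t)\bigr)\,dt
      + \sigma\bigl(t, X(t), X_{[0,t]}, \mathbb{E}[X(t)], u(t)\bigr)\,dB(t),
\]

where X_{[0,t]} = \{X(s) : 0 \le s \le t\} denotes the whole past trajectory (the "elephant memory") and the expectation term supplies the mean-field ingredient.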


3.
Abstract

A procedure is explained for deriving stochastic partial differential equations from basic principles. A discrete stochastic model is first constructed. Then, a stochastic differential equation system is derived, which leads to a certain stochastic partial differential equation. To illustrate the procedure, a representative problem is first studied in detail. Exact solutions, available for the representative problem, show that the resulting stochastic partial differential equation is accurate. Next, stochastic partial differential equations are derived for a one-dimensional vibrating string, for energy-dependent neutron transport, and for cotton-fiber breakage. Several computational comparisons are made.
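A compact sketch of the kind of derivation described (our summary, not an excerpt from the paper): if the discretized quantity X changes by \Delta X over a small time step \Delta t with

\[
\mathbb{E}[\Delta X] \approx \mu(X)\,\Delta t, \qquad \operatorname{Cov}(\Delta X) \approx V(X)\,\Delta t,
\]

then the discrete model is approximated by the Itô system

\[
dX(t) = \mu\bigl(X(t)\bigr)\,dt + B\bigl(X(t)\bigr)\,dW(t), \qquad B B^{\mathsf T} = V,
\]

and refining the spatial discretization yields the corresponding stochastic partial differential equation.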

4.
Mathematical mean-field approaches have been used in many fields, not only in Physics and Chemistry, but also recently in Finance, Economics, and Game Theory. In this paper we study a new special mean-field problem by a purely probabilistic method, in order to characterize its limit, which is the solution of a mean-field backward stochastic differential equation (BSDE) with reflection. On the other hand, we prove that this type of reflected mean-field BSDE can also be obtained as the limit equation of mean-field BSDEs by a penalization method. Finally, we give a probabilistic interpretation of nonlinear and nonlocal partial differential equations with obstacles via the solutions of reflected mean-field BSDEs.
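As an illustrative sketch of the penalization step (with a generic obstacle S and driver f; the notation is ours, not the paper's), the reflected equation is approximated by

\[
Y^n_t = \xi + \int_t^T f\bigl(s, Y^n_s, Z^n_s, \mathbb{E}[Y^n_s]\bigr)\,ds
       + n \int_t^T \bigl(Y^n_s - S_s\bigr)^-\,ds - \int_t^T Z^n_s\,dB_s,
\]

and as n tends to infinity the increasing processes K^n_t = n\int_0^t (Y^n_s - S_s)^-\,ds converge to the reflecting process that keeps the limit Y above the obstacle.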

5.
We study optimal control for mean-field stochastic partial differential equations (stochastic evolution equations) driven by a Brownian motion and an independent Poisson random measure, in the case of partial information control. One important novelty of our problem is the introduction of general mean-field operators, acting on both the controlled state process and the control process. We first formulate a sufficient and a necessary maximum principle for this type of control. We then prove the existence and uniqueness of the solution of such general forward and backward mean-field stochastic partial differential equations. We apply our results to find the explicit optimal control for an optimal harvesting problem.

6.
In this paper, we study mean-field backward stochastic differential equations driven by G-Brownian motion (G-BSDEs). We first obtain an existence and uniqueness theorem for these equations. In fact, local solutions are obtained by constructing a Picard contraction mapping for the Y term on a small interval, and the global solution is then obtained through backward iteration of the local solutions. Then, a comparison theorem for this type of mean-field G-BSDE is derived. Furthermore, we establish the connection between this mean-field G-BSDE and a nonlocal partial differential equation. Finally, we give an application of mean-field G-BSDEs to a stochastic differential utility model.
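For orientation (a simplified form in our own notation, not quoted from the paper), a mean-field G-BSDE can be sketched as

\[
Y_t = \xi + \int_t^T f\bigl(s, Y_s, Z_s, \hat{\mathbb{E}}[Y_s]\bigr)\,ds
      - \int_t^T Z_s\,dB_s - (K_T - K_t),
\]

where B is a G-Brownian motion, \hat{\mathbb{E}} is the sublinear G-expectation carrying the mean-field term, and K is a non-increasing G-martingale with K_0 = 0. On a small interval the map sending a candidate Y into the right-hand side is a contraction, which gives the local solution that is then pasted backward in time.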

7.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward-backward stochastic differential equation (MFFBSDE, for short) and the admissible control is of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland's variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle which characterizes the necessary condition for the optimal control. We also study a stochastic linear-quadratic control problem with state constraints.

8.
This paper establishes a necessary and sufficient stochastic maximum principle for a mean-field model with randomness described by Brownian motions and Poisson jumps. We also prove the existence and uniqueness of the solution to a jump-diffusion mean-field backward stochastic differential equation. A new version of the sufficient stochastic maximum principle, which only requires that the terminal cost be convex in an expected sense, is applied to solve a bicriteria mean-variance portfolio selection problem.
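For context (our illustration of why mean-variance selection is of mean-field type, not text from the paper), the bicriteria objective can be written with a risk-aversion weight \gamma > 0 as

\[
J(u) = \gamma \operatorname{Var}(X^u_T) - \mathbb{E}[X^u_T]
     = \gamma\Bigl(\mathbb{E}\bigl[(X^u_T)^2\bigr] - \bigl(\mathbb{E}[X^u_T]\bigr)^2\Bigr) - \mathbb{E}[X^u_T],
\]

so the cost depends nonlinearly on the law of the terminal wealth through \mathbb{E}[X^u_T], which is precisely the mean-field feature handled by the maximum principle above.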

9.
This paper considers a stochastic control problem in which the dynamic system is a controlled backward stochastic heat equation with Neumann boundary control and boundary noise, and the state must coincide with a given random vector at the terminal time. By defining a proper notion of mild solution for the state equation, the existence and uniqueness of the mild solution are established. As a main result, a global maximum principle for our control problem is presented. The main result is also applied to a backward linear-quadratic control problem in which an optimal control is obtained explicitly as a feedback of the solution to a forward-backward stochastic partial differential equation.

10.
The present paper considers an optimal control problem for fully coupled forward-backward stochastic differential equations (FBSDEs) of mean-field type in the case of a controlled diffusion coefficient. Moreover, the control domain is not assumed to be convex. By virtue of a reduction method, we establish necessary optimality conditions of Pontryagin's type. As an application, a linear-quadratic stochastic control problem is studied.

11.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (random partial differential equation) associated to a linear backward Hamilton-Jacobi-Bellman stochastic partial differential equation (HJB SPDE). This appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends the Wong-Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how this arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving the anticipative control problems by almost sure deterministic optimal control. We obtain a PDE for the “cost of perfect information”, the difference between the cost function of the nonanticipative control problem and the cost of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions are found ensuring that this has a global solution. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant for the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

12.
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we take an approach based on spike variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs an SMP for a mean-field type optimal stochastic control problem; however, our use of second-order adjoint processes of a second type to handle what we call the second form of quadratic-type terms is an alternative to adapting, to our setting, the approach used in their article for the analogous type of term.

13.
The authors study an optimal control problem for a conditional mean-field stochastic differential equation. Such equations are related to certain stochastic optimal control problems under partial information and can be regarded as a generalization of mean-field stochastic differential equations. Necessary and sufficient conditions satisfied by the optimal control are given in the form of Pontryagin's maximum principle. In addition, a linear-quadratic optimal control problem is presented to illustrate the application of the theoretical results.
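A schematic state equation of this conditional mean-field type (our hypothetical notation, not from the paper) is

\[
dX_t = b\bigl(t, X_t, \mathbb{E}[X_t \mid \mathcal{G}_t], u_t\bigr)\,dt
     + \sigma\bigl(t, X_t, \mathbb{E}[X_t \mid \mathcal{G}_t], u_t\bigr)\,dW_t,
\]

where \mathcal{G}_t is a sub-filtration (for instance one generated by part of the noise or by an observation process); replacing the conditional expectation by \mathbb{E}[X_t] recovers the usual mean-field SDE, which is the sense in which these equations generalize it.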

14.
In this paper we study ergodic backward stochastic differential equations (EBSDEs), dropping the strong dissipativity assumption needed in Fuhrman et al. (2009) [12]. In other words, we do not need to require the uniform exponential decay of the difference of two solutions of the underlying forward equation, which, on the contrary, is assumed to be non-degenerate. We show the existence of solutions by the use of coupling estimates for a non-degenerate forward stochastic differential equation with bounded measurable nonlinearity. Moreover, we prove the uniqueness of “Markovian” solutions by exploiting the recurrence of the same class of forward equations. Applications are then given to the optimal ergodic control of stochastic partial differential equations and to the associated ergodic Hamilton-Jacobi-Bellman equations.
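For orientation (the standard form of an EBSDE, written here in our notation rather than quoted from the paper), the unknown is a triple (Y, Z, \lambda) satisfying, for all 0 \le t \le T < \infty,

\[
Y_t = Y_T + \int_t^T \bigl[\psi(X_s, Z_s) - \lambda\bigr]\,ds - \int_t^T Z_s\,dW_s,
\]

where X is the non-degenerate forward diffusion and the constant \lambda is interpreted as the optimal ergodic (long-run average) cost in the control application.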

15.
This paper is concerned with Kalman-Bucy filtering problems of a forward and backward stochastic system which is a Hamiltonian system arising from a stochastic optimal control problem. There are two main contributions worth pointing out. One is that we obtain the Kalman-Bucy filtering equation of a forward and backward stochastic system and study a kind of stability of this filtering equation. The other is that we develop a backward separation technique, which is different from Wonham's separation theorem, to study a partially observed recursive optimal control problem. This new technique also covers more general situations; for example, a partially observed linear-quadratic non-zero sum differential game problem is solved by it. We also give a simple formula to estimate the value of information, namely the difference between the optimal cost functionals in the partially and the fully observable information cases.

16.
17.
The authors discuss one type of general forward-backward stochastic differential equations (FBSDEs) with Itô's stochastic delayed equations as the forward equations and anticipated backward stochastic differential equations as the backward equations. Existence and uniqueness results for these general FBSDEs are obtained. In the framework of the general FBSDEs of this paper, the explicit form of the optimal control for a linear-quadratic stochastic optimal control problem with delay and the Nash equilibrium point for a non-zero sum differential game problem with delay are obtained.
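A schematic of such a coupled system with delay \delta > 0 (our simplified sketch, not the paper's exact equations) reads

\[
\begin{aligned}
dX(t) &= b\bigl(t, X(t), X(t-\delta), u(t)\bigr)\,dt + \sigma\bigl(t, X(t), X(t-\delta), u(t)\bigr)\,dW(t),\\
-\,dY(t) &= f\bigl(t, Y(t), Z(t), \mathbb{E}_t[Y(t+\delta)], \mathbb{E}_t[Z(t+\delta)]\bigr)\,dt - Z(t)\,dW(t),
\end{aligned}
\]

with X prescribed on [-\delta, 0] and Y, Z prescribed on (T, T+\delta]; the forward equation carries the delay, while the backward equation is anticipated in the sense that its driver looks a step \delta ahead through the conditional expectation \mathbb{E}_t[\,\cdot\,] = \mathbb{E}[\,\cdot \mid \mathcal{F}_t].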

18.
An optimal control problem for a controlled backward stochastic partial differential equation in the abstract evolution form with a Bolza-type performance functional is considered. The control domain is not assumed to be convex, and all coefficients of the system are allowed to be random. A variational formula for the functional in a given control process direction is derived via the Hamiltonian and the associated adjoint system. As an application, a global stochastic maximum principle of Pontryagin's type for the optimal controls is established.

19.
In this paper, a new class of backward doubly stochastic differential equations driven by Teugels martingales associated with a Lévy process satisfying some moment condition and an independent Brownian motion is investigated. We obtain the existence and uniqueness of solutions to these equations. A probabilistic interpretation for solutions to a class of stochastic partial differential integral equations is given.

20.
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover, the cost functional is also of mean-field type, which makes the control problem time-inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space, a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
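As a rough indication of the structure (a simplified case in which the coefficients depend on the law only through \mathbb{E}[X_t]; the paper allows more general functionals of the law), the Hamiltonian and the adjoint equation take the form

\[
H(t, x, m, u, p, q) = b(t, x, m, u)\,p + \sigma(t, x, m, u)\,q + f(t, x, m, u),
\]
\[
dp_t = -\Bigl(\partial_x H(t, X_t, \mathbb{E}[X_t], u_t, p_t, q_t)
       + \mathbb{E}\bigl[\partial_m H(t, X_t, \mathbb{E}[X_t], u_t, p_t, q_t)\bigr]\Bigr)\,dt + q_t\,dW_t,
\]

with a terminal condition of the analogous form p_T = \partial_x g(X_T, \mathbb{E}[X_T]) + \mathbb{E}[\partial_m g(X_T, \mathbb{E}[X_T])]; the extra expectation of the derivative with respect to the mean argument is what turns the adjoint equation into a mean-field backward SDE rather than a classical one.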
