Similar Literature
1.
We consider a control problem for the stochastic heat equation with Neumann boundary condition, where controls and noise terms are defined inside the domain as well as on the boundary. The noise terms are given by independent Q-Wiener processes. Under some assumptions, we derive necessary and sufficient optimality conditions that stochastic controls have to satisfy. Using these optimality conditions, we obtain explicit formulas showing that the stochastic optimal controls are feedback controls. This conclusion is important because it ensures that the controls are adapted to a certain filtration, so that the state is an adapted process as well.
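For orientation, the controlled system described above can be stated formally as follows; this is a hedged sketch, in which the domain D, the formal noises, the two controls and the quadratic cost are illustrative assumptions rather than the paper's exact formulation:

$$\partial_t y = \Delta y + u_d + \eta_d \ \text{ in } (0,T)\times D, \qquad \frac{\partial y}{\partial \nu} = u_b + \eta_b \ \text{ on } (0,T)\times\partial D, \qquad y(0,\cdot)=y_0,$$

where $\eta_d$ and $\eta_b$ stand formally for the independent Q-Wiener noises acting inside the domain and on the boundary, $u_d$ and $u_b$ are the distributed and boundary controls, and a cost such as $J(u)=\mathbb{E}\big[\int_0^T (\|y(t)\|^2+\|u_d(t)\|^2+\|u_b(t)\|^2)\,dt\big]$ would be minimized over adapted controls.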

2.
This article deals with a stochastic control problem for certain fluids of non-Newtonian type. More precisely, the state equation is given by the two-dimensional stochastic second grade fluid equations perturbed by a multiplicative white noise. The control acts through an external stochastic force, and we search for a control that minimizes a cost functional. We show that the Gâteaux derivative of the control-to-state map is a stochastic process given by the unique solution of the linearized stochastic state equation. The well-posedness of the corresponding backward stochastic adjoint equation is also established, allowing us to derive the first-order optimality condition.

3.
In this paper, the authors investigate the optimal rate at which land use is irreversibly converted from biodiversity conservation to agricultural production. The problem is formulated as a stochastic control model and then transformed into an HJB equation involving a free boundary. Since the state equation has a singularity, it is difficult to derive the boundary value condition for the HJB equation directly. The authors provide a new method to overcome this difficulty by constructing an auxiliary stochastic control problem, from which a proper boundary value condition is imposed. Moreover, they establish the existence and uniqueness of the viscosity solution of the HJB equation. Finally, they propose a stable numerical method for the HJB equation with free boundary and present some numerical results.
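The abstract does not reproduce the equation; for a generic irreversible-conversion problem of this type, the free-boundary HJB typically takes a variational-inequality form such as the following illustrative sketch, in which the discount rate $\rho$, the generator $\mathcal{L}$, the conservation benefit $f$ and the marginal conversion value $g$ are assumed for illustration and are not taken from the paper:

$$\min\Big\{\rho V(x,p)-\mathcal{L}V(x,p)-f(x,p),\ \partial_x V(x,p)-g(p)\Big\}=0,$$

where $x$ is the stock of unconverted land and $p$ a stochastic factor (for instance agricultural returns) driving $\mathcal{L}$; the free boundary separates the region where conservation continues ($\rho V-\mathcal{L}V-f=0$) from the region where conversion is optimal ($\partial_x V=g$).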

4.
M. Gugat, Applicable Analysis, 2013, 92(10): 2200-2214
We consider an exact boundary control problem for the wave equation with given initial and terminal data and Dirichlet boundary control. The aim is to steer the state of the system, which is defined on a given domain, to a position of rest in finite time. The optimal control obtained as the solution of the problem depends on the data that define the problem, in particular on the domain. For the numerical solution of the control problem, this domain is often replaced by a polygon. This motivates the study of the convergence of the optimal controls for the polygon to the optimal controls for the given domain. To study the convergence, the values of the optimal controls defined on the boundaries of the approximating polygons are mapped, in the normal directions of the polygon, to control functions defined on the boundary of the original domain. This map has already been used by Bramble and King, by Deckelnick, Guenther and Hinze, and by Casas and Sokolowski. Using this map, we show the strong convergence of the transformed controls as the polygons approach the given domain. An essential tool for obtaining the convergence is a regularization term in the objective function, which increases the regularity of the state.
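As an illustration only (the paper's exact functional is not given in the abstract), a regularized exact-control problem of this type can be written as: minimize, over Dirichlet boundary controls $u$,

$$J(u)=\tfrac12\int_0^T\!\!\int_{\partial\Omega}|u|^2\,ds\,dt+\tfrac{\lambda}{2}\int_0^T\!\!\int_{\partial\Omega}|\partial_t u|^2\,ds\,dt$$

subject to $y_{tt}=c^2\Delta y$ in $(0,T)\times\Omega$, $y=u$ on $(0,T)\times\partial\Omega$, $y(0)=y_0$, $y_t(0)=y_1$, and the exact terminal constraint $y(T)=y_t(T)=0$. Here the weight $\lambda>0$ and the particular choice of regularization norm are assumptions made for illustration; the $\lambda$-term plays the role of the regularization that raises the regularity of the optimal control and hence of the state.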

5.
This paper studies the existence and uniqueness of solutions for a class of one-dimensional backward stochastic differential equations driven by fractional Brownian motion. Under the assumption that the generator is Lipschitz continuous in y but only uniformly continuous in z, an inequality estimate for the solution of the BSDE is obtained by applying the Tanaka formula for fractional Brownian motion together with a monotonicity property satisfied, under certain conditions, by the quasi-conditional expectation. An existence and uniqueness result for this class of equations then follows from Gronwall's inequality. This generalizes some classical results as well as existing results on BSDEs driven by fractional Brownian motion whose generators satisfy a uniform Lipschitz condition.

6.
A linear elliptic control problem with pointwise state constraints is considered. These constraints are given in the domain, whereas the control acts only at the boundary. We propose a general concept using a virtual control. The virtual control is introduced in the objective, the state equation, and the constraints. Moreover, additional control constraints for the virtual control are investigated. An error estimate for the regularization error is derived as the main result of the paper. The theory is illustrated by numerical tests.

7.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward-backward stochastic differential equation (MFFBSDE, for short) and the admissible controls are of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland's variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle characterizing a necessary condition for the optimal control. We also study a stochastic linear quadratic control problem with state constraints.
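For reference, a controlled mean-field FBSDE of the kind described typically has the generic form below; this is an illustrative sketch in which the coefficients $b,\sigma,f,\Phi$ and the one-dimensional setting are assumptions, not the paper's exact model:

$$dX_t=b(t,X_t,\mathbb{E}[X_t],u_t)\,dt+\sigma(t,X_t,\mathbb{E}[X_t],u_t)\,dW_t,\qquad X_0=x,$$
$$dY_t=-f(t,X_t,\mathbb{E}[X_t],Y_t,\mathbb{E}[Y_t],Z_t,u_t)\,dt+Z_t\,dW_t,\qquad Y_T=\Phi(X_T,\mathbb{E}[X_T]).$$

The appearance of the expectations of the state and of the backward components in the coefficients is what distinguishes the mean-field case from a standard FBSDE.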

8.
A class of optimal control problems for a parabolic equation with a nonlinear boundary condition and constraints on the control and the state is considered. Associated approximate problems are established, where the state equation is defined by a semidiscrete Ritz-Galerkin method. Moreover, we are able to allow for the discretization of the admissible controls. We show the convergence of the approximate controls to the solution of the exact control problem as the discretization parameter tends to zero. This result holds under the assumption of a certain sufficient second-order optimality condition. Dedicated to the 60th birthday of Lothar von Wolfersdorf.

9.
This paper is concerned with Kalman-Bucy filtering problems for a forward and backward stochastic system, namely a Hamiltonian system arising from a stochastic optimal control problem. There are two main contributions worth pointing out. One is that we obtain the Kalman-Bucy filtering equation of a forward and backward stochastic system and study a kind of stability of this filtering equation. The other is that we develop a backward separation technique, different from Wonham's separation theorem, to study a partially observed recursive optimal control problem. This new technique also covers more general situations; for example, a partially observed linear quadratic non-zero-sum differential game problem is solved by it. We also give a simple formula to estimate the information value, which is the difference between the optimal cost functionals in the partially and fully observable information cases.

10.
We consider a controlled system driven by a coupled forward–backward stochastic differential equation with a nondegenerate diffusion matrix. The cost functional is defined by the solution of the controlled backward stochastic differential equation at the initial time. Our goal is to find an optimal control which minimizes the cost functional. The method consists in constructing a sequence of approximating controlled systems, for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we establish the existence of a relaxed optimal control for the initial problem. The existence of a strict control follows from the Filippov convexity condition.

11.
This paper discusses a mean–variance portfolio selection problem under a constant elasticity of variance model. A backward stochastic Riccati equation is first considered. The solution of the associated stochastic control problem is then related to that of the backward stochastic Riccati equation. Finally, explicit expressions for the optimal portfolio strategy, the value function and the efficient frontier of the mean–variance problem are given in terms of the solution of the backward stochastic Riccati equation.
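For context, the constant elasticity of variance (CEV) model is usually written as

$$dS_t=S_t\big(\mu\,dt+\sigma S_t^{\beta}\,dW_t\big),$$

with elasticity parameter $\beta$ (the Black-Scholes model is recovered for $\beta=0$); this is a standard parametrization used here for orientation, and the paper's exact notation may differ. The mean–variance problem then minimizes $\mathrm{Var}(X_T)$ over admissible portfolios subject to $\mathbb{E}[X_T]=z$ for a prescribed target $z$.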

12.
In this article, we derive the existence and uniqueness of the solution for a class of generalized reflected backward stochastic differential equations involving an integral with respect to a continuous process, namely the local time of the diffusion on the boundary, by using the penalization method. We also give a characterization of the solution as the value function of an optimal stopping problem. Then we give a probabilistic formula for the viscosity solution of an obstacle problem for PDEs with a nonlinear Neumann boundary condition.
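A generalized reflected BSDE of the type described typically reads as follows; this is a generic illustrative form, with assumed generator $f$, boundary generator $g$, obstacle $S$ and terminal value $\xi$, where $A$ denotes the boundary local time of the underlying reflected diffusion:

$$Y_t=\xi+\int_t^T f(s,Y_s,Z_s)\,ds+\int_t^T g(s,Y_s)\,dA_s+K_T-K_t-\int_t^T Z_s\,dW_s,$$

together with $Y_t\ge S_t$ and the minimality (Skorokhod) condition $\int_0^T (Y_t-S_t)\,dK_t=0$, where $K$ is the increasing process that keeps $Y$ above the obstacle.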

13.
We discuss a linear quadratic optimal control problem whose stochastic system is a linear stochastic differential equation driven by a Lévy process, with random coefficients and an affine term. The adjoint equation has unbounded coefficients, so its solvability is not obvious. Using the theory of BMO martingales, we prove the existence and uniqueness of the solution of the adjoint equation on a finite time horizon. Under a stability condition, the existence of solutions of the backward stochastic Riccati differential equation and of the associated backward stochastic adjoint equation on the infinite horizon is obtained by approximation with the solutions of the corresponding finite-horizon equations. The optimal control can be synthesized from these solutions.

14.
The present paper studies the stochastic maximum principle in singular optimal control, where the state is governed by a stochastic differential equation with nonsmooth coefficients, allowing both classical control and singular control. The proof of the main result is based on the approximation of the initial problem by a sequence of control problems with smooth coefficients. We then apply Ekeland's variational principle to this approximating sequence of control problems in order to establish necessary conditions satisfied by a sequence of near-optimal controls. Finally, we prove the convergence of the scheme, using Krylov's inequality in the nondegenerate case and the Bouleau-Hirsch flow property in the degenerate one. The adjoint process obtained is given by means of distributional derivatives of the coefficients.

15.
The existence and numerical estimation of a boundary control for the n-dimensional linear diffusion equation are considered. The problem is modified into one consisting of the minimization of a linear functional over a set of Radon measures. The existence of an optimal measure corresponding to this problem is shown, and the optimal measure is approximated by a finite convex combination of atomic measures. This construction gives rise to a finite-dimensional linear programming problem, whose solution can be used to construct the combination of atomic measures, and thus a piecewise-constant control function which approximates the action of the optimal measure, so that the final state corresponding to this control function is close to the desired final state and the value it assigns to the performance criterion is close to the corresponding infimum. A numerical procedure is developed for the estimation of these controls, entailing the solution of large, finite-dimensional linear programming problems. This procedure is illustrated by several examples.
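The finite-dimensional step described above can be sketched in a few lines: fix a grid of candidate atoms, treat their weights as the variables of a linear program, and impose the equality constraints that the measure must satisfy. The sketch below (Python, using scipy.optimize.linprog) is purely illustrative, with a made-up grid, cost vector and moment constraints standing in for the data that would come from the diffusion problem.

    # Illustrative sketch only: approximate an optimal Radon measure by a convex
    # combination of atomic measures on a grid and solve the resulting LP.
    # The grid, cost and "moment" constraints below are invented for illustration.
    import numpy as np
    from scipy.optimize import linprog

    atoms = np.linspace(0.0, 1.0, 51)            # candidate atom locations t_j
    c = atoms**2                                  # assumed linear cost functional
    phis = np.vstack([np.ones_like(atoms),        # total mass
                      atoms,                      # first moment
                      np.sin(np.pi * atoms)])     # a further test function
    b = np.array([1.0, 0.4, 0.6])                 # assumed target values

    res = linprog(c, A_eq=phis, b_eq=b, bounds=(0, None), method="highs")
    weights = res.x                               # coefficients of the atomic measures
    support = atoms[weights > 1e-8]               # atoms actually carrying mass
    print(support, weights[weights > 1e-8])

A solution supported on few atoms is then turned into a piecewise-constant control function, exactly as the abstract describes.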

16.
We study the linear quadratic optimal stochastic control problem which is jointly driven by Brownian motion and Lévy processes. We prove that the new affine stochastic differential adjoint equation admits an inverse process, by applying the section theorem. Applying Bellman's principle of quasilinearization and a monotone iterative convergence method, we prove the existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that the optimal feedback control exists and that the value function is composed of the initial values of the solutions of the related backward Riccati differential equation and the related adjoint equation.

17.
This paper considers the numerical solution of optimal control problems based on ODEs. We assume that an explicit Runge-Kutta method is applied to integrate the state equation in the context of a recursive discretization approach. To compute the gradient of the cost function, one may employ Automatic Differentiation (AD). This paper presents the integration schemes that are automatically generated when differentiating the discretization of the state equation using AD. We show that they can be seen as discretization methods for the sensitivity and adjoint differential equations of the underlying control problem. Furthermore, we prove that the convergence rate of the scheme automatically derived for the sensitivity equation coincides with the convergence rate of the integration scheme for the state equation. Under mild additional assumptions on the coefficients of the integration scheme for the state equation, we show a similar result for the scheme automatically derived for the adjoint equation. Numerical results illustrate these theoretical results.
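Since the abstract describes differentiating a recursively discretized Runge-Kutta integration of the state equation with AD, a minimal sketch may help fix ideas. The example below is not the paper's code; the dynamics x' = -x + u, the quadratic running cost, the step size and the use of JAX as the AD tool are all assumptions chosen for illustration. Reverse-mode differentiation of the discretized cost is what generates the discrete adjoint scheme discussed in the article.

    # Illustrative sketch: RK4 discretization of the state equation, with the gradient
    # of the discrete cost with respect to a piecewise-constant control obtained by AD.
    import jax
    import jax.numpy as jnp

    def f(x, u):                       # assumed dynamics x' = -x + u
        return -x + u

    def rk4_step(x, u, h):             # one explicit RK4 step, control frozen on the step
        k1 = f(x, u)
        k2 = f(x + 0.5 * h * k1, u)
        k3 = f(x + 0.5 * h * k2, u)
        k4 = f(x + h * k3, u)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    def cost(controls, x0=1.0, h=0.01):
        x, J = x0, 0.0
        for i in range(controls.shape[0]):   # recursive discretization, unrolled
            u = controls[i]
            x = rk4_step(x, u, h)
            J = J + 0.5 * h * (x**2 + u**2)  # running quadratic cost
        return J + 0.5 * x**2                # terminal penalty on the final state

    u = jnp.zeros(100)                 # piecewise-constant control on 100 subintervals
    grad = jax.grad(cost)(u)           # reverse-mode AD: a discrete adjoint scheme
    print(grad[:5])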

18.
In the present paper, we study a necessary condition under which the solutions of a stochastic differential equation governed by unbounded control processes remain in an arbitrarily small neighborhood of a given set of constraints. We prove that, in comparison to the classical constrained control problem with bounded control processes, a further assumption on the growth of the control processes is needed in order to obtain a necessary and sufficient condition in terms of viscosity solutions of the associated Hamilton-Jacobi-Bellman equation. A rather general example illustrates our main result.

19.
Existence results for optimal controls are obtained both for the stochastic recursive optimal control problem, in which the cost functional is described by the solution of a particular backward stochastic differential equation, and for the recursive mixed optimal control problem, in which the controller must in addition choose an optimal stopping time. Within a class of equivalent probability measures, the minimal and maximal mathematical expectations of the recursive optimal value function are also given.
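For reference, a recursive cost functional of the kind mentioned above is typically of the generic form below; the generator $f$, terminal payoff $\Phi$ and controlled state $X^u$ are assumed notation for illustration, not the paper's:

$$J(u)=Y_0^u,\qquad Y_t^u=\Phi(X_T^u)+\int_t^T f\big(s,X_s^u,Y_s^u,Z_s^u,u_s\big)\,ds-\int_t^T Z_s^u\,dW_s,$$

so that the cost is the initial value of a backward SDE rather than an expected integral; the mixed problem additionally optimizes over stopping times $\tau\le T$ at which the recursion is stopped.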
