Similar Literature
20 similar results found (search time: 562 ms)
1.
We consider a control problem for the stochastic heat equation with Neumann boundary conditions, where controls and noise terms are defined inside the domain as well as on the boundary. The noise terms are given by independent Q-Wiener processes. Under some assumptions, we derive necessary and sufficient optimality conditions that stochastic controls have to satisfy. Using these optimality conditions, we establish explicit formulas showing that stochastic optimal controls are given by feedback controls. This is an important conclusion, since it ensures that the controls are adapted to a certain filtration; the state is then an adapted process as well.
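The feedback conclusion above has a familiar finite-dimensional analogue. As a minimal sketch (a scalar LQ caricature, not the paper's infinite-dimensional heat-equation setting; all parameter values are illustrative), consider dx = (a·x + b·u) dt + σ dW with cost E[∫(q·x² + r·u²) dt + p_T·x(T)²]: the optimal control is the feedback u*(t, x) = -(b/r)·P(t)·x, where P solves a backward Riccati ODE.

```python
import numpy as np

# Scalar LQ caricature: dx = (a*x + b*u) dt + sigma dW,
# cost E[ int_0^T (q*x**2 + r*u**2) dt + pT * x(T)**2 ].
# Optimal feedback: u*(t, x) = -(b/r) * P(t) * x, where P solves
# the Riccati ODE  -P' = 2*a*P - (b**2/r)*P**2 + q,  P(T) = pT.
a, b, sigma = 0.0, 1.0, 0.3
q, r, pT = 1.0, 1.0, 1.0
T, n = 1.0, 1000
h = T / n

P = np.empty(n + 1)
P[n] = pT
for k in range(n, 0, -1):        # integrate the Riccati ODE backward in time
    dP = 2 * a * P[k] - (b ** 2 / r) * P[k] ** 2 + q
    P[k - 1] = P[k] + h * dP

gain = -(b / r) * P[0]           # feedback gain at t = 0
print(P[0], gain)
```

With these particular values (a = 0, b = q = r = p_T = 1) the terminal condition is a fixed point of the Riccati flow, so P ≡ 1 and the feedback at time 0 is u* = -x; the noise level σ does not enter the gain, matching the general LQ structure.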

2.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (a nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (a random partial differential equation) associated to a linear backward Hamilton-Jacobi-Bellman stochastic partial differential equation (HJB SPDE). The latter appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends the Wong-Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how this arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving the anticipative control problems by almost sure deterministic optimal control. We obtain a PDE for the "cost of perfect information", the difference between the cost function of the nonanticipative control problem and the cost of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions are found ensuring that this has a global solution. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant for the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

3.

In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of the Ito type. We study the relaxed model, for which an optimal solution exists. This is an extension of the initial control problem, in which admissible controls are measure-valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.

4.
The purpose of this paper is to derive some pointwise second-order necessary conditions for stochastic optimal controls in the general case that the control variable enters into both the drift and the diffusion terms. When the control region is convex, a pointwise second-order necessary condition for stochastic singular optimal controls in the classical sense is established; when the control region is allowed to be nonconvex, we obtain a pointwise second-order necessary condition for stochastic singular optimal controls in the sense of a Pontryagin-type maximum principle. It is found that, quite differently from the first-order necessary conditions, the correction part of the solution to the second-order adjoint equation appears in the pointwise second-order necessary conditions whenever the diffusion term depends on the control variable, even if the control region is convex.

5.
Near-optimal controls are as important as optimal controls in both theory and applications. Meanwhile, using an inhibitor to control harmful microorganisms and ensure maximum growth of beneficial (target) microorganisms is a topic of great interest in the chemostat. Thus, in this paper, we consider a stochastic chemostat model with a nonzero-cost inhibitor over a finite time horizon. The near-optimal control problem is constructed by minimizing both the number of harmful microorganisms and the cost of the inhibitor. We find that the Hamiltonian function is the key to estimating the objective function, and from the adjoint equation we obtain error estimates for near-optimality. Finally, we establish sufficient and necessary conditions for stochastic near-optimal controls of this model; numerical simulations and some conclusions are also given.
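To fix ideas, here is a deliberately simplified one-dimensional caricature of the controlled harmful-microorganism density (a hypothetical toy model, not the paper's chemostat system; the names `growth`, `u`, and `vol` are illustrative): dy = y·(growth - u) dt + vol·y dW, simulated by Euler-Maruyama under a constant inhibitor dose u.

```python
import numpy as np

# Hypothetical toy model of the harmful-microorganism density y under a
# constant inhibitor dose u (NOT the paper's chemostat model):
#   dy = y * (growth - u) dt + vol * y dW.
# When u > growth, the mean density decays like y0 * exp((growth - u) * T).
rng = np.random.default_rng(1)
growth, u, vol = 0.5, 1.0, 0.2
y0, T, n, M = 1.0, 2.0, 200, 20000
h = T / n

y = np.full(M, y0)
for _ in range(n):               # Euler-Maruyama step on each sample path
    dW = rng.normal(0.0, np.sqrt(h), size=M)
    y = y + y * (growth - u) * h + vol * y * dW

mean_yT = float(y.mean())
print(mean_yT, y0 * np.exp((growth - u) * T))
```

The Monte Carlo mean of y(T) comes out close to the closed-form mean y0·e^{(growth-u)·T}; the trade-off the paper optimizes is between driving this mean down and the cost of the dose u, which the toy leaves fixed.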

6.
We study the stochastic regulator problem in Hilbert spaces for systems governed by linear stochastic differential equations with retarded controls and with state- and control-dependent noise. We use integral Riccati equations; no reference to a Riccati differential equation or to the Ito formula is made.

7.
The existence of insensitizing controls for a forward stochastic heat equation is considered. To develop the duality, we obtain observability estimates for linear forward and backward coupled stochastic heat equations with general coefficients, by means of some global Carleman estimates. Furthermore, the constant in the observability inequality is estimated by an explicit function of the norm of the involved coefficients in the equation. As far as we know, our paper is the first one to address the problem of insensitizing controls for stochastic partial differential equations.

8.
The purpose of this paper is to establish the first and second order necessary conditions for stochastic optimal controls in infinite dimensions. The control system is governed by a stochastic evolution equation, in which both drift and diffusion terms may contain the control variable and the set of controls is allowed to be nonconvex. Only one adjoint equation is introduced to derive the first order necessary optimality condition either by means of the classical variational analysis approach or, under an additional assumption, by using differential calculus of set-valued maps. More importantly, in order to avoid the essential difficulty with the well-posedness of higher order adjoint equations, using again the classical variational analysis approach, only the first and the second order adjoint equations are needed to formulate the second order necessary optimality condition, in which the solutions to the second order adjoint equation are understood in the sense of the relaxed transposition.

9.
This paper deals with the optimal control of space-time statistical behavior of turbulent fields. We provide a unified treatment of optimal control problems for the deterministic and stochastic Navier-Stokes equation with linear and nonlinear constitutive relations. Tonelli type ordinary controls as well as Young type chattering controls are analyzed. For the deterministic case with monotone viscosity we use the Minty-Browder technique to prove the existence of optimal controls. For the stochastic case with monotone viscosity, we combine the Minty-Browder technique with the martingale problem formulation of Stroock and Varadhan to establish existence of optimal controls. The deterministic models given in this paper also cover some simple eddy viscosity type turbulence closure models. Accepted 7 June 1999

10.
We consider a controlled system driven by a coupled forward-backward stochastic differential equation with a nondegenerate diffusion matrix. The cost functional is defined by the solution of the controlled backward stochastic differential equation at the initial time. Our goal is to find an optimal control which minimizes the cost functional. The method consists in constructing a sequence of approximating controlled systems, for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we establish the existence of a relaxed optimal control to the initial problem. The existence of a strict control follows from the Filippov convexity condition.

11.
An optimal control problem for a controlled backward stochastic partial differential equation in the abstract evolution form with a Bolza-type performance functional is considered. The control domain is not assumed to be convex, and all coefficients of the system are allowed to be random. A variational formula for the functional in a given control process direction is derived via the Hamiltonian and the associated adjoint system. As an application, a global stochastic maximum principle of Pontryagin type for the optimal controls is established.

12.
This paper examines the value function of a partial hedging problem under model ambiguity. The study is based on a dual representation of the value function obtained by the authors. We formulate a family of control problems whose value processes are characterized as solutions of a backward stochastic differential equation, and we give a sufficient condition to identify optimal controls.

13.
In this paper, we present a new numerical approach to solving nonlinear backward stochastic differential equations. First, we present some definitions and theorems that give the conditions under which the nonlinear term of the backward stochastic differential equation (BSDE) can be approximated, obtaining a continuous piecewise-linear BSDE corresponding to the original one. We then use the relationship between BSDEs and stochastic control, interpreting BSDEs as stochastic optimal control problems, to solve the approximated BSDE, and we prove that the approximated solution converges to the exact solution of the original nonlinear BSDE in two different cases.
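For a concrete feel for numerical BSDE solving, here is a standard least-squares Monte Carlo backward scheme (a common textbook method, not the scheme of the paper above) applied to a linear BSDE with a known closed form: driver f(y) = -c·y, terminal condition Y_T = X_T² with X a Brownian motion, so Y_0 = T·e^{-cT}. All parameter values are illustrative.

```python
import numpy as np

# Least-squares Monte Carlo for the linear BSDE
#   -dY = f(Y) dt - Z dW,   f(y) = -c*y,   Y_T = X_T**2,   X = Brownian motion.
# Closed form: Y_0 = T * exp(-c*T).  Conditional expectations in the backward
# induction are estimated by quadratic polynomial regression on X_{t_i}.
rng = np.random.default_rng(0)
T, n, c, M = 1.0, 20, 0.5, 40000
h = T / n

dW = rng.normal(0.0, np.sqrt(h), size=(M, n))
X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

Y = X[:, -1] ** 2                          # terminal condition on each path
for i in range(n - 1, 0, -1):
    target = (1.0 - c * h) * Y             # explicit Euler step in the driver
    coef = np.polyfit(X[:, i], target, 2)  # regress on a quadratic basis
    Y = np.polyval(coef, X[:, i])          # conditional-expectation estimate
Y0 = float(np.mean((1.0 - c * h) * Y))     # at t=0, X is deterministic: plain mean

exact = T * np.exp(-c * T)
print(Y0, exact)
```

The quadratic regression basis is exact here because the true value function stays quadratic in X, so the remaining error is the O(h) time-discretization bias plus Monte Carlo noise; for genuinely nonlinear drivers one enlarges the basis and iterates the same backward loop.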

14.
Chen Hanfu (陈翰馥), Acta Mathematica Sinica (数学学报), 1979, 22(4): 438-447
The quadratic performance index is a very important case in stochastic control. For discrete-time systems that are linear in the state, the problem has been completely solved. Continuous-time systems have also been discussed extensively, but the proofs given so far are not complete, and the conditions required are too strong. Let (Ω, F, P) be a probability space and {F_t}, 0 ≤ t ≤ T, a nondecreasing family of σ-fields completed with the null sets of F…

15.
16.
This paper discusses a stationary model of singular stochastic control in which the functions in the cost structure are not restricted to be even, and the state process is of diffusion type with drift and diffusion coefficients that are "asymmetric" (about the origin). The stationary problem in singular stochastic control is thus substantially extended to a more general form. The paper solves a system of variational equations associated with this class of problems and proves the existence of an optimal control.

17.

We consider a forward-backward system of stochastic evolution equations in a Hilbert space. Under nondegeneracy assumptions on the diffusion coefficient (which may be nonconstant) we prove an analogue of the well-known Bismut-Elworthy formula. Next, we consider a nonlinear version of the Kolmogorov equation, i.e. a deterministic quasilinear equation associated to the system according to Pardoux, E. and Peng, S. (1992), "Backward stochastic differential equations and quasilinear parabolic partial differential equations", in Rozovskii, B.L., Sowers, R.B. (eds.), Stochastic Partial Differential Equations and Their Applications, Lecture Notes in Control and Information Sciences, Vol. 176, pp. 200-217, Springer, Berlin. The Bismut-Elworthy formula is applied to prove a smoothing effect, i.e. existence and uniqueness of a solution that is differentiable with respect to the space variable, even if the initial datum and (some) coefficients of the equation are not. The results are then applied to the Hamilton-Jacobi-Bellman equation of stochastic optimal control. In this way we are able to characterize optimal controls by feedback laws for a class of infinite-dimensional control systems, including in particular the stochastic heat equation with state-dependent diffusion coefficient.

18.
The paper provides a systematic way of finding a partial differential equation that directly characterizes the optimal control, in the framework of one-dimensional stochastic control problems of Mayer type with no constraints on the controls. The results obtained are applied to continuous-time portfolio problems.
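A classical continuous-time portfolio example of the kind alluded to above is Merton's problem with logarithmic utility (a standard special case, not the PDE method of the paper; the symbols `mu`, `rf`, `sig` are illustrative): holding a constant fraction π in the risky asset gives E[log X_T] = log x0 + (rf + π(mu - rf) - ½π²sig²)T, so the optimal fraction is π* = (mu - rf)/sig², independent of wealth and horizon.

```python
import numpy as np

# Merton problem with log utility: maximize the log-growth rate
#   g(pi) = rf + pi*(mu - rf) - 0.5 * pi**2 * sig**2
# over the constant risky fraction pi.  Closed form: pi* = (mu - rf) / sig**2.
mu, rf, sig, T = 0.08, 0.02, 0.2, 1.0

pis = np.linspace(-1.0, 3.0, 4001)
growth = rf + pis * (mu - rf) - 0.5 * pis ** 2 * sig ** 2
pi_grid = pis[np.argmax(growth)]          # numerical maximizer on the grid
pi_closed = (mu - rf) / sig ** 2          # closed-form optimizer
print(pi_grid, pi_closed)
```

The grid search recovers the closed-form fraction (here 1.5, i.e. a leveraged position); the strict concavity of g in π is what makes the first-order condition, and more generally a PDE characterization of the control, well behaved.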

19.
Given an unstable hybrid stochastic functional differential equation, how does one design a delay feedback controller to stabilize it? Some results have been obtained for hybrid systems with finite delay. However, the state of many stochastic differential equations depends on the whole history of the system, so it is necessary to discuss feedback control of stochastic functional differential equations with infinite delay. On the other hand, in many practical stochastic models the coefficients do not satisfy the linear growth condition but are highly nonlinear. In this paper, delay feedback controls are designed for a class of infinite-delay stochastic systems with highly nonlinear coefficients and switching states.

20.
In Part I, methods of nonstandard analysis are applied to deterministic control theory, extending earlier work of the author. Results established include compactness of relaxed controls, continuity of the solution and cost as functions of the controls, and existence of optimal controls. In Part II, the methods are extended to obtain similar results for partially observed stochastic control. The systems considered are controlled stochastic differential equations in which the feedback control u depends on information from a digital read-out of the observation process y. The noise in the state equation is controlled along with the drift. Similar methods are applied to a Markov system in the final section.

