Similar Literature
 20 similar documents found
1.
We consider a control problem for the stochastic heat equation with Neumann boundary condition, where controls and noise terms are defined inside the domain as well as on the boundary. The noise terms are given by independent Q-Wiener processes. Under some assumptions, we derive necessary and sufficient optimality conditions that stochastic controls have to satisfy. Using these optimality conditions, we establish explicit formulas with the result that stochastic optimal controls are given by feedback controls. This is an important conclusion to ensure that the controls are adapted to a certain filtration. Therefore, the state is an adapted process as well.

2.
The purpose of this paper is to establish the first and second order necessary conditions for stochastic optimal controls in infinite dimensions. The control system is governed by a stochastic evolution equation, in which both drift and diffusion terms may contain the control variable and the set of controls is allowed to be nonconvex. Only one adjoint equation is introduced to derive the first order necessary optimality condition either by means of the classical variational analysis approach or, under an additional assumption, by using differential calculus of set-valued maps. More importantly, in order to avoid the essential difficulty with the well-posedness of higher order adjoint equations, using again the classical variational analysis approach, only the first and the second order adjoint equations are needed to formulate the second order necessary optimality condition, in which the solutions to the second order adjoint equation are understood in the sense of the relaxed transposition.

3.
In this paper, we study discounted Markov decision processes on an uncountable state space. We allow a utility (reward) function to be unbounded both from above and below. A new feature in our approach is an easily verifiable rate of growth condition introduced for a positive part of the utility function. This assumption, in turn, enables us to prove the convergence of a value iteration algorithm to a solution to the Bellman equation. Moreover, by virtue of the optimality equation we show the existence of an optimal stationary policy.
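The value-iteration scheme this abstract refers to can be sketched on a finite toy MDP. This is only an illustration of the fixed-point argument (the paper works on uncountable state spaces with unbounded utilities); the function and parameter names below are illustrative, not taken from the paper:

```python
import numpy as np

def value_iteration(P, r, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality operator to an (approximate) fixed point.

    P[a] -- transition matrix under action a (n x n, rows sum to 1)
    r[a] -- reward vector under action a (length n)
    Returns the approximate value function and a greedy stationary policy.
    """
    n = P[0].shape[0]
    V = np.zeros(n)
    while True:
        # One application of the Bellman optimality operator.
        Q = np.array([r[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # value and greedy stationary policy
        V = V_new
```

Since the Bellman operator is a gamma-contraction in the sup norm when rewards are bounded, the iterates converge geometrically and the greedy policy at the limit is stationary optimal; the rate-of-growth condition in the paper plays the analogous role when the utility is unbounded.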

4.
We study an optimal control problem for forward-backward stochastic control systems driven jointly by Teugels martingales and an independent multidimensional Brownian motion. Here the Teugels martingales are a family of pairwise strongly orthogonal normal martingales associated with a Lévy process (see Nualart and Schoutens, 2000). Under the assumption that the control domain is a nonempty closed convex set, we obtain sufficient and necessary conditions for the existence of an optimal control by means of convex variational methods and duality techniques. As an application, we systematically study the linear-quadratic optimal control problem for linear forward-backward stochastic systems (the FBLQ problem for short) and give a dual characterization of the optimal control through the corresponding stochastic Hamiltonian system. Here the stochastic Hamiltonian system is a linear forward-backward stochastic differential equation driven jointly by Teugels martingales and multidimensional Brownian motion, consisting of the state equation, the adjoint equation, and the dual representation of the optimal control.

5.

In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of Itô type. We study the relaxed model, for which an optimal solution exists. This is an extension of the initial control problem, where admissible controls are measure-valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.

6.

We study the problem of optimal control of a jump diffusion, that is, a process which is the solution of a stochastic differential equation driven by Lévy processes. It is required that the control process is adapted to a given subfiltration of the filtration generated by the underlying Lévy processes. We prove two maximum principles (one sufficient and one necessary) for this type of partial information control. The results are applied to a partial information mean-variance portfolio selection problem in finance.

7.
In this paper, we prove a sufficient condition and a necessary condition of the maximum principle for a class of partial-information stochastic control problems, in which the control system is a stochastic partial differential equation driven by a martingale and a Brownian motion.

8.
In this paper, necessary conditions of optimality, in the form of a maximum principle, are obtained for singular stochastic control problems. This maximum principle is derived for a state process satisfying a general stochastic differential equation where the coefficient associated to the control process can be dependent on the state, extending earlier results of the literature.

9.
We study a class of hyperbolic stochastic partial differential equations in Euclidean space, that includes the wave equation and the telegraph equation, driven by Gaussian noise concentrated on a hyperplane. The noise is assumed to be white in time but spatially homogeneous within the hyperplane. Two natural notions of solutions are function-valued solutions and random field solutions. For the linear form of the equations, we identify the necessary and sufficient condition on the spectral measure of the spatial covariance for existence of each type of solution, and it turns out that the conditions differ. In spatial dimensions 2 and 3, under the condition for existence of a random field solution to the linear form of the equation, we prove existence and uniqueness of a random field solution to non-linear forms of the equation.



10.
Continuous-time mean-variance portfolio selection with nonlinear wealth equations and bankruptcy prohibition is investigated by the dual method. A necessary and sufficient condition which the optimal terminal wealth satisfies is obtained through a terminal perturbation technique. It is also shown that the optimal wealth and portfolio are given by the solution of a forward-backward stochastic differential equation with constraints.

11.
Ocone and Pardoux have introduced a stochastic differential equation in which the initial condition and the drift depend on the driving Brownian motion in an anticipative way. In this paper we prove a limit theorem for such equations when the Brownian motion is approximated by a sequence of piecewise linear processes.

12.
The objective of the paper is to investigate the approximate controllability property of a linear stochastic control system with values in a separable real Hilbert space. In a first step we prove the existence and uniqueness for the solution of the dual linear backward stochastic differential equation. This equation has the particularity that in addition to an unbounded operator acting on the Y-component of the solution there is still another one acting on the Z-component. With the help of this dual equation we then deduce the duality between approximate controllability and observability. Finally, under the assumption that the unbounded operator acting on the state process of the forward equation is an infinitesimal generator of an exponentially stable semigroup, we show that the generalized Hautus test provides a necessary condition for the approximate controllability. The paper generalizes former results by Buckdahn, Quincampoix and Tessitore (Stochastic Partial Differential Equations and Applications, Series of Lecture Notes in Pure and Appl. Math., vol. 245, pp. 253–260, Chapman and Hall, London, 2006) and Goreac (Applied Analysis and Differential Equations, pp. 153–164, World Scientific, Singapore, 2007) from the finite dimensional to the infinite dimensional case.

13.
Piecewise deterministic Markov processes (PDPs) are continuous time homogeneous Markov processes whose trajectories are solutions of ordinary differential equations with random jumps between the different integral curves. Both continuous deterministic motion and the random jumps of the processes are controlled in order to minimize the expected value of a performance criterion involving discounted running and boundary costs. Under fairly general assumptions, we will show that there exists an optimal control, that the value function is Lipschitz continuous and that a generalized Bellman-Hamilton-Jacobi (BHJ) equation involving the Clarke generalized gradient is a necessary and sufficient optimality condition for the problem.

14.
The well-known theorem of T. Yamada and S. Watanabe asserts that (weak) existence and pathwise uniqueness of the solution of a stochastic equation implies the existence of a strong solution. This is the most powerful tool for proving that a stochastic equation possesses a strong solution. However, pathwise uniqueness is far from being a necessary condition for this. Even if the solution is not unique in law it is also of interest to ask for strong solutions. In the present note, we will discuss in more detail the connection between pathwise uniqueness and the existence of a strong solution. We will state a condition which is not only sufficient but also necessary for the existence of a strong solution.

15.
The trajectories of piecewise deterministic Markov processes are solutions of an ordinary (vector) differential equation with possible random jumps between the different integral curves. Both continuous deterministic motion and the random jumps of the processes are controlled in order to minimize the expected value of a performance functional consisting of continuous, jump and terminal costs. A limiting form of the Hamilton-Jacobi-Bellman partial differential equation is shown to be a necessary and sufficient optimality condition. The existence of an optimal strategy is proved and a characterization of the value function as a supremum of smooth subsolutions is also given. The approach consists of imbedding the original control problem tightly in a convex mathematical programming problem on the space of measures and then solving the latter by duality.

16.
In this article, we consider a filtering problem for forward-backward stochastic systems that are driven by Brownian motions and Poisson processes. This kind of filtering problem arises from the study of partially observable stochastic linear-quadratic control problems. Combining forward-backward stochastic differential equation theory with certain classical filtering techniques, the desired filtering equation is established. To illustrate the filtering theory, the theoretical result is applied to solve a partially observable linear-quadratic control problem, where an explicit observable optimal control is determined by the optimal filtering estimation.

17.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption we use in our approach is shown to be a necessary and sufficient assumption for the homogenizability of the control problem. The results generalize partially homogenization problems for Hamilton–Jacobi–Bellman equations treated recently by Alvarez and Bardi by viscosity solution methods. In contrast to their approach, we use mainly probabilistic arguments, and discuss a stochastic control interpretation for the limit equation.

18.
An analysis is given of the state and first-come, first-served waiting time processes for a three stage queueing system with no waiting space between stages, but with limited space before the first. Although the basic processes are exponential, no detailed analysis in this generality appears to have been made before; the nearest analyses entail the simplifying assumption that the first stage is never empty. A sufficient condition is derived easily for equilibrium to exist and it can be asserted with virtual certainty that it is also necessary; the complexity of calculation has so far excluded a proper proof, though this is in principle a possibility. The objective is to provide a theoretical framework easily adaptable for a numerical assessment of system performance to be made. Some typical tables with comments are given.
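A numerical assessment of the kind this abstract describes can be sketched with a small discrete-event simulation of a three-stage exponential line with no buffers between stages and a finite buffer of size N in front of the first stage. All function and parameter names are illustrative, not taken from the paper:

```python
import random

def simulate_tandem(lam, mu1, mu2, mu3, N, T, seed=1):
    """Simulate a 3-stage tandem queue with blocking until time T.

    lam      -- Poisson arrival rate (arrivals lost when the buffer is full)
    mu1..mu3 -- exponential service rates of the three stages
    N        -- capacity (queue + server) in front of stage 1
    Returns (accepted, lost, served, number left in system).
    """
    rng = random.Random(seed)
    t = 0.0
    n1 = 0                # customers at stage 1 (queue + server), at most N
    b1 = False            # stage-1 customer done but blocked by stage 2
    s2 = b2 = s3 = False  # stage 2 occupied / blocked; stage 3 occupied
    accepted = lost = served = 0
    while t < T:
        # Enabled exponential events and their rates.
        events = [('arr', lam)]
        if n1 > 0 and not b1: events.append(('d1', mu1))
        if s2 and not b2:     events.append(('d2', mu2))
        if s3:                events.append(('d3', mu3))
        total = sum(rate for _, rate in events)
        t += rng.expovariate(total)
        u, acc = rng.random() * total, 0.0
        for name, rate in events:
            acc += rate
            if u <= acc:
                break
        if name == 'arr':                     # arrival, accepted or lost
            if n1 < N: n1 += 1; accepted += 1
            else: lost += 1
        elif name == 'd1':                    # stage 1 finishes service
            if not s2: n1 -= 1; s2 = True
            else: b1 = True                   # blocked: stage 2 occupied
        elif name == 'd2':                    # stage 2 finishes service
            if not s3:
                s2, s3 = False, True
                if b1: b1 = False; n1 -= 1; s2 = True   # unblock stage 1
            else:
                b2 = True                     # blocked: stage 3 occupied
        else:                                 # 'd3': departure from stage 3
            s3 = False; served += 1
            if b2:                            # cascade of unblocking
                b2, s2, s3 = False, False, True
                if b1: b1 = False; n1 -= 1; s2 = True
    return accepted, lost, served, n1 + int(s2) + int(s3)
```

Blocked customers keep occupying their server until the downstream stage frees up, which is exactly the coupling between stages that makes the exact analysis hard; long-run throughput and loss fractions estimated this way can be tabulated against the offered load.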

19.
This article deals with a stochastic control problem for certain fluids of non-Newtonian type. More precisely, the state equation is given by the two-dimensional stochastic second grade fluids perturbed by a multiplicative white noise. The control acts through an external stochastic force and we search for a control that minimizes a cost functional. We show that the Gâteaux derivative of the control-to-state map is a stochastic process which is the unique solution of the stochastic linearized state equation. The well-posedness of the corresponding stochastic backward adjoint equation is also established, which allows us to derive the first order optimality condition.

20.
The present paper studies the stochastic maximum principle in singular optimal control, where the state is governed by a stochastic differential equation with nonsmooth coefficients, allowing both classical control and singular control. The proof of the main result is based on the approximation of the initial problem by a sequence of control problems with smooth coefficients. We then apply Ekeland's variational principle to this approximating sequence of control problems, in order to establish necessary conditions satisfied by a sequence of near optimal controls. Finally, we prove the convergence of the scheme, using Krylov's inequality in the nondegenerate case and the Bouleau-Hirsch flow property in the degenerate one. The adjoint process obtained is given by means of distributional derivatives of the coefficients.
