Similar Articles (20 results found)
1.
This paper studies the linear-quadratic control problem for stochastic differential equations of Itô-Poisson type. Using dynamic programming, Itô's formula, and related techniques, we solve the HJB equation and obtain a stochastic Riccati equation together with two further differential equations; from these we derive the control variable and thereby solve the linear-quadratic optimal control problem.
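The abstract does not reproduce the equations themselves; purely for orientation, here is the standard scalar, diffusion-only LQ case (no Poisson jumps, with hypothetical constant coefficients A, B, Q, R, G), where the ansatz V(t,x) = P(t)x² + r(t) in the HJB equation yields a Riccati equation and a linear feedback law. The paper's Itô-Poisson setting additionally involves the jump terms, which this sketch omits.

```latex
% Illustrative diffusion-only scalar LQ problem (not the Ito-Poisson setting of the paper):
%   state:  dX_t = (A X_t + B u_t)\,dt + \sigma\,dW_t
%   cost:   J(u) = \mathbb{E}\Big[\int_0^T \big(Q X_t^2 + R u_t^2\big)\,dt + G X_T^2\Big]
% With the ansatz V(t,x) = P(t) x^2 + r(t), the HJB equation reduces to
\begin{align*}
  -\dot P(t) &= 2 A P(t) + Q - \frac{B^2 P(t)^2}{R}, & P(T) &= G,\\
  -\dot r(t) &= \sigma^2 P(t),                        & r(T) &= 0,\\
  u^*(t,x)   &= -\frac{B P(t)}{R}\,x .
\end{align*}
```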

2.
1. Introduction. Let (Ω, F, P, {F_t}_{t≥0}) be a complete filtered probability space on which a standard one-dimensional Brownian motion w(·) is defined, such that {F_t}_{t≥0} is the natural filtration generated by w(·), augmented by all the P-null sets in F. We consider the following state equation, where τ ∈ T[0, T], the set of all {F_t}_{t≥0}-stopping times taking values in [0, T]; ξ ∈ L²_{F_τ}(Ω; ℝⁿ); and A, B, C, D are matrix-valued {F_t}_{t≥0}-adapted bounded processes. In the above, u(·) ∈ U[τ, T] ≡ L²(τ, T…

3.
Existence results for optimal controls are obtained for the stochastic recursive optimal control problem, in which the cost functional is described by the solution of a particular backward stochastic differential equation, and for the recursive mixed optimal control problem, in which the controller must also choose an optimal stopping time. Within a class of equivalent probability measures, the minimal and maximal expectations of the recursive optimal value function are also given.

4.
In this paper we study a stochastic partial differential equation (SPDE) with Hölder continuous coefficient driven by an α-stable colored noise. The pathwise uniqueness is proved by using a backward doubly stochastic differential equation (BDSDE) to take care of the Laplacian. The existence of a solution is shown by considering the weak limit of a sequence of SDE systems obtained by replacing the Laplacian operator in the SPDE by its discrete version. We also study an SDE system driven by Poisson random measures.
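To make the discretization step concrete, here is a minimal sketch (with hypothetical grid sizes, a Hölder-1/2 coefficient, and plain Gaussian space-time white noise in place of the α-stable colored noise of the paper) of replacing the Laplacian by its three-point finite-difference matrix and evolving the resulting SDE system with an Euler-Maruyama step.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): replace the Laplacian on [0, 1]
# with its standard three-point finite-difference matrix and evolve the resulting SDE
# system with an Euler-Maruyama step. Plain Gaussian noise is used here instead of the
# alpha-stable colored noise considered in the paper.

def discrete_laplacian(m, dx):
    """Three-point finite-difference Laplacian with zero Dirichlet boundary values."""
    lap = (np.diag(-2.0 * np.ones(m)) +
           np.diag(np.ones(m - 1), 1) +
           np.diag(np.ones(m - 1), -1))
    return lap / dx**2

m, dx, dt, n_steps = 50, 1.0 / 51, 1e-4, 1000
lap = discrete_laplacian(m, dx)
rng = np.random.default_rng(0)

x = np.sin(np.pi * np.linspace(dx, 1 - dx, m))   # initial profile
for _ in range(n_steps):
    noise = rng.standard_normal(m) * np.sqrt(dt / dx)          # space-time white-noise scaling
    x = x + lap @ x * dt + np.sqrt(np.maximum(x, 0.0)) * noise  # Holder-1/2 coefficient

print("mean of approximate solution at t = n_steps*dt:", x.mean())
```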

5.
We consider a controlled system driven by a coupled forward–backward stochastic differential equation with a non-degenerate diffusion matrix. The cost functional is defined by the solution of the controlled backward stochastic differential equation at the initial time. Our goal is to find an optimal control which minimizes the cost functional. The method consists in constructing a sequence of approximating controlled systems, for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we establish the existence of a relaxed optimal control for the initial problem. The existence of a strict control follows from the Filippov convexity condition.

6.
This work is concerned with numerical schemes for stochastic optimal control problems (SOCPs) by means of forward backward stochastic differential equations (FBSDEs). We first convert the stochastic optimal control problem into an equivalent stochastic optimality system of FBSDEs. Then we design an efficient second-order FBSDE solver and a quasi-Newton type optimization solver for the resulting system. Notably, our approach achieves a second-order rate of convergence even when the state equation is approximated by the Euler scheme. Several numerical examples are presented to illustrate the effectiveness and the accuracy of the proposed numerical schemes.
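For intuition about the "control problem → finite-dimensional optimization" step, the sketch below solves a toy scalar LQ problem by Euler discretization of the state equation, a time-dependent feedback-gain parametrization, and scipy's BFGS (a quasi-Newton method) applied to a Monte Carlo cost with fixed random numbers. All coefficients are hypothetical, and this is not the paper's second-order FBSDE-based scheme.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration: an Euler scheme for the state equation plus a quasi-Newton (BFGS)
# optimizer over time-dependent feedback gains, on a fixed set of Monte Carlo paths.
# Hypothetical coefficients throughout.

a, b, sigma, q, r, g = -1.0, 1.0, 0.5, 1.0, 0.1, 1.0
T, n_steps, n_paths = 1.0, 20, 2000
dt = T / n_steps
rng = np.random.default_rng(1)
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)  # common random numbers

def cost(k):
    """Monte Carlo cost for the feedback control u_t = -k[n] * X_t on step n."""
    x = np.ones(n_paths)           # X_0 = 1
    total = np.zeros(n_paths)
    for n in range(n_steps):
        u = -k[n] * x
        total += (q * x**2 + r * u**2) * dt
        x = x + (a * x + b * u) * dt + sigma * dW[:, n]
    total += g * x**2
    return total.mean()

res = minimize(cost, x0=np.zeros(n_steps), method="BFGS")
print("approximate optimal time-dependent feedback gains:", np.round(res.x, 3))
print("approximate optimal cost:", res.fun)
```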

7.
The paper is concerned with optimal control of a backward stochastic differential equation (BSDE) driven by Teugel's martingales and an independent multi-dimensional Brownian motion, where Teugel's martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see, e.g., Nualart and Schoutens' paper in 2000). We derive the necessary and sufficient conditions for the existence of the optimal control by means of convex variation methods and duality techniques. As an application, the optimal control problem of a linear backward stochastic differential equation with a quadratic cost criterion (the backward linear-quadratic problem, or BLQ problem for short) is discussed and characterized by a stochastic Hamilton system.

8.
The aim of the present paper is to study the regularity properties of the solution of a backward stochastic differential equation with a monotone generator in infinite dimension. We show some applications to the nonlinear Kolmogorov equation and to stochastic optimal control.

9.
In this paper we study, mathematically and computationally, optimal control problems for stochastic elliptic partial differential equations. The control objective is to minimize the expectation of a tracking cost functional, and the control is of the deterministic, distributed type. The main analytical tool is the Wiener-Itô chaos or the Karhunen-Loève expansion. Mathematically, we prove the existence of an optimal solution; we establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations; we represent the input data in their Wiener-Itô chaos expansions and deduce the deterministic optimality system of equations. Computationally, we approximate the optimality system through discretization of the probability space and of the spatial domain by the finite element method; we also derive error estimates in terms of both types of discretization.
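As a small illustration of one of the tools named in the abstract, the sketch below computes a truncated Karhunen-Loève expansion of a random field by eigendecomposing a discretized covariance kernel; the exponential kernel, grid, and truncation level are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Illustrative truncated Karhunen-Loeve expansion: discretize an exponential covariance
# kernel on a grid, keep the leading eigenpairs, and sample an approximate realization
# of the random coefficient field.

n, corr_len = 200, 0.2
x = np.linspace(0.0, 1.0, n)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)                      # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

n_terms = 10                                                # truncation level of the expansion
rng = np.random.default_rng(2)
xi = rng.standard_normal(n_terms)                           # independent N(0,1) KL coefficients
field = eigvecs[:, :n_terms] @ (np.sqrt(eigvals[:n_terms]) * xi)

captured = eigvals[:n_terms].sum() / eigvals.sum()
print(f"variance captured by {n_terms} KL terms: {captured:.3f}")
```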

10.
In this paper, the authors investigate the optimal conversion rate at which land use is irreversibly converted from biodiversity conservation to agricultural production. This problem is formulated as a stochastic control model and then transformed into an HJB equation involving a free boundary. Since the state equation has a singularity, it is difficult to directly derive the boundary value condition for the HJB equation. The authors provide a new method to overcome this difficulty by constructing an auxiliary stochastic control problem and imposing a proper boundary value condition. Moreover, they establish the existence and uniqueness of the viscosity solution of the HJB equation. Finally, they propose a stable numerical method for the HJB equation involving the free boundary, and show some numerical results.

11.
A Study of Optimal Portfolio Models
This paper studies a stochastic model of an investor's optimal portfolio in a complete financial market. When the model parameters are constant and the utility function is a bounded measurable function on ([0, T], B[0, T]), the maximal-utility value function is shown to be a smooth solution of the HJB equation associated with the corresponding stochastic control problem; an optimal strategy is shown to exist, and the optimal portfolio strategy is given in feedback form.

12.
In this article, we initiate a study of optimal control problems for linear stochastic differential equations with quadratic cost functionals under generalized expectation, via backward stochastic differential equations.

13.
In a previous paper we gave a new formulation and derived the Euler equations and other necessary conditions to solve strong, pathwise, stochastic variational problems with trajectories driven by Brownian motion. Thus, unlike current methods which minimize the control over deterministic functionals (the expected value), we find the control which gives the critical point solution of random functionals of a Brownian path and then, if we choose, find the expected value.

This increase in information is balanced by the fact that our methods are anticipative while current methods are not. However, our methods are more directly connected to the theory and meaningful examples of deterministic variational theory and provide better means of solution for free and constrained problems. In addition, examples indicate that there are methods to obtain nonanticipative solutions from our equations, although the anticipative optimal cost function has smaller expected value.

In this paper we give new, efficient numerical methods to find the solution of these problems in the quadratic case. Of interest is that our numerical solution has a maximal, a priori, pointwise error of O(h^{3/2}) where h is the node size. We believe our results are unique for any theory of stochastic control and that our methods of proof involve new and sophisticated ideas for strong solutions which extend previous deterministic results by the first author where the error was O(h^2).

We note that, although our solutions are given in terms of stochastic differential equations, we are not using the now standard numerical methods for stochastic differential equations. Instead we find an approximation to the critical point solution of the variational problem using relations derived from setting to zero the directional derivative of the cost functional in the direction of simple test functions.

Our results are even more significant than they first appear because we can reformulate stochastic control problems or constrained calculus of variations problems in the unconstrained, stochastic calculus of variations formulation of this paper. This will allow us to find efficient and accurate numerical solutions for general constrained, stochastic optimization problems. This is not yet being done, even in the deterministic case, except by the first author.
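As a toy illustration of the "set the directional derivative of the cost functional to zero" idea, the sketch below fixes a Brownian path, discretizes a hypothetical quadratic tracking functional (not the paper's problem), and solves the resulting linear system for the pathwise, anticipative critical-point control.

```python
import numpy as np

# Toy pathwise sketch: for a FIXED Brownian path W, minimize the discretized quadratic
# functional
#   J(u) = sum_i [ 0.5*u_i^2 + 0.5*(x_i - W_{t_i})^2 ] * h,   x_{i+1} = x_i + u_i*h, x_0 = 0.
# Setting the directional derivative of J to zero in every perturbation direction gives
# a linear system for the critical-point (anticipative) control, solved directly below.

T, N = 1.0, 200
h = T / N
rng = np.random.default_rng(3)
W = np.concatenate(([0.0], np.cumsum(rng.standard_normal(N - 1) * np.sqrt(h))))  # W at t_0..t_{N-1}

# x_i = h * sum_{j < i} u_j = (L @ u)_i, with L strictly lower triangular.
L = h * np.tril(np.ones((N, N)), k=-1)

# Gradient of J:  h*u + h*L^T (L u - W) = 0   =>   (I + L^T L) u = L^T W.
u_star = np.linalg.solve(np.eye(N) + L.T @ L, L.T @ W)

x_star = L @ u_star
print("pathwise optimal cost:",
      0.5 * h * (u_star**2).sum() + 0.5 * h * ((x_star - W)**2).sum())
```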

14.
This paper is concerned with Kalman-Bucy filtering problems for a forward and backward stochastic system, namely a Hamiltonian system arising from a stochastic optimal control problem. There are two main contributions worth pointing out. One is that we obtain the Kalman-Bucy filtering equation of a forward and backward stochastic system and study a kind of stability of this filtering equation. The other is that we develop a backward separation technique, different from Wonham's separation theorem, to study a partially observed recursive optimal control problem. This new technique also covers more general situations; for example, a partially observed linear-quadratic non-zero-sum differential game problem is solved by it. We also give a simple formula to estimate the information value, i.e., the difference between the optimal cost functionals in the partially observed and fully observed cases.
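For readers unfamiliar with Kalman-Bucy filtering, here is a minimal sketch of the classical forward-only scalar filter (Euler-discretized Riccati equation and gain); the paper's forward-and-backward setting is more involved, and the model and coefficients below are hypothetical.

```python
import numpy as np

# Classical (forward-only) Kalman-Bucy filter for a hypothetical scalar linear system:
#   signal:       dx    = a x dt + sigma dW
#   observation:  dy    = c x dt + nu dV
#   filter:       dxhat = a xhat dt + K (dy - c xhat dt),  K = P c / nu^2,
#                 dP/dt = 2 a P + sigma^2 - (c^2 / nu^2) P^2   (Riccati equation).

a, sigma, c, nu = -0.5, 1.0, 1.0, 0.3
T, n_steps = 5.0, 5000
dt = T / n_steps
rng = np.random.default_rng(4)

x, xhat, P = 1.0, 0.0, 1.0
err2 = 0.0
for _ in range(n_steps):
    dW, dV = rng.standard_normal(2) * np.sqrt(dt)
    dy = c * x * dt + nu * dV                      # observation increment
    x += a * x * dt + sigma * dW                   # true (hidden) state
    K = P * c / nu**2                              # Kalman gain
    xhat += a * xhat * dt + K * (dy - c * xhat * dt)
    P += (2 * a * P + sigma**2 - (c**2 / nu**2) * P**2) * dt
    err2 += (x - xhat) ** 2 * dt

print("time-averaged squared filtering error:", err2 / T)
print("Riccati variance P(T):", P)
```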

15.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption we use in our approach is shown to be a necessary and sufficient assumption for the homogenizability of the control problem. The results generalize partially homogenization problems for Hamilton–Jacobi–Bellman equations treated recently by Alvarez and Bardi by viscosity solution methods. In contrast to their approach, we use mainly probabilistic arguments, and discuss a stochastic control interpretation for the limit equation.

16.
This paper investigates an investment-reinsurance problem for an insurance company that can choose among different business activities, including reinsurance/new business and security investment. Our main objective is to find the optimal policy that minimizes its probability of ruin. The main novelty of this paper is the introduction of a dynamic Value-at-Risk (VaR) constraint. This provides a way to control risk and to fulfill the requirement of regulators on market risk. The problem is formulated as an infinite-horizon stochastic control problem with a constrained control space. The dynamic programming technique is applied to derive the Hamilton-Jacobi-Bellman (HJB) equation, and the Lagrange multiplier method is used to tackle the dynamic VaR constraint. Closed-form expressions for the minimal ruin probability as well as the optimal investment-reinsurance/new business policy are derived. It turns out that the risk exposure of the insurance company subject to the dynamic VaR constraint is always lower than it would otherwise be. Finally, a numerical example is given to illustrate our results.
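The paper derives closed-form results; purely to show mechanically how a per-period VaR cap can restrict the investment position, here is a discrete-time Monte Carlo toy (all parameters, the surplus model, and the "invest half, capped" strategy are hypothetical and not from the paper).

```python
import numpy as np

# Toy Monte Carlo sketch: discrete-time surplus with a per-step VaR cap on the amount
# invested. Hypothetical model:
#   R_{n+1} = R_n + premium*dt - claims_n + pi_n * market_return_n,
# where pi_n is capped so that the alpha-quantile of the one-step investment loss,
# approximated by a normal quantile, does not exceed var_limit.

premium, claim_rate, claim_mean = 2.0, 1.0, 1.5
mu, vol, dt = 0.05, 0.2, 0.1
alpha_quantile, var_limit = 2.326, 1.0          # approx. 99% standard normal quantile
T_steps, n_paths = 200, 5000
rng = np.random.default_rng(5)

# Largest position whose one-step VaR = pi*(-mu*dt + z*vol*sqrt(dt)) stays below var_limit.
pi_cap = var_limit / (-mu * dt + alpha_quantile * vol * np.sqrt(dt))

ruined = 0
for _ in range(n_paths):
    surplus = 5.0
    for _ in range(T_steps):
        pi = min(0.5 * surplus, pi_cap)         # hypothetical strategy: invest half, capped
        claims = rng.poisson(claim_rate * dt) * claim_mean
        ret = mu * dt + vol * np.sqrt(dt) * rng.standard_normal()
        surplus += premium * dt - claims + pi * ret
        if surplus < 0:
            ruined += 1
            break

print("estimated ruin probability under the VaR cap:", ruined / n_paths)
```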

17.
We study the Riccati equation arising in a class of quadratic optimal control problems with infinite-dimensional stochastic differential state equation and infinite-horizon cost functional. We allow the coefficients, both in the state equation and in the cost, to be random. In such a context, backward stochastic Riccati equations are backward stochastic differential equations on the whole positive real axis that involve quadratic nonlinearities and take values in a non-Hilbertian space. We prove existence of a minimal non-negative solution and, under additional assumptions, its uniqueness. We show that such a solution allows one to perform the synthesis of the optimal control, and we investigate its attractivity properties. Finally, the case where the coefficients are stationary is addressed and an example concerning a controlled wave equation in random media is proposed.

18.
This paper treats a finite time horizon optimal control problem in which the controlled state dynamics are governed by a general system of stochastic functional differential equations with a bounded memory. An infinite dimensional Hamilton–Jacobi–Bellman (HJB) equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation.

19.
Zhao Weidong (赵卫东), Mathematica Numerica Sinica (《计算数学》), 2015, 37(4): 337-373
In 1990, Pardoux and Peng (彭实戈) solved the existence and uniqueness problem for solutions of nonlinear backward stochastic differential equations (BSDEs), thereby laying the theoretical foundation for forward backward stochastic differential equations (FBSDEs). Since then, FBSDEs have been studied extensively and applied in many fields, such as stochastic optimal control, partial differential equations, mathematical finance, risk measures, and nonlinear expectations. In recent years, the numerical solution of FBSDEs has attracted increasing attention. Based on the structure of FBSDEs, this paper surveys their main numerical solution methods, focusing on integral-discretization methods and differential-approximation methods for solving FBSDEs, including one-step and multi-step schemes, together with the corresponding numerical and theoretical analysis results. The differential-approximation approach yields efficient, high-accuracy parallel numerical methods for fully coupled FBSDEs, and it uses only the simplest Euler method to solve the forward stochastic differential equation, which greatly reduces the complexity of the problem. Finally, we pose several open and challenging problems facing the numerical solution of FBSDEs.
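To give a feel for the kind of one-step backward scheme surveyed here, the sketch below solves a decoupled FBSDE with driver f = 0 (chosen so that the answer is known in closed form) by backward Euler with conditional expectations approximated by least-squares Monte Carlo regression; this is a generic toy, not any specific method from the survey.

```python
import numpy as np

# Decoupled toy FBSDE with driver f = 0.  Forward SDE: X_t = W_t (Euler is exact).
# Terminal condition: Y_T = X_T^2, so Y_t = E[X_T^2 | F_t] = X_t^2 + (T - t), Y_0 = T.
# Backward step: Y_n = E[Y_{n+1} | X_n], approximated by regression on {1, x, x^2}.

T, n_steps, n_paths = 1.0, 50, 100_000
dt = T / n_steps
rng = np.random.default_rng(6)

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)  # Euler paths

Y = X[:, -1] ** 2                                   # terminal condition Y_N = g(X_N)
for n in range(n_steps - 1, 0, -1):
    basis = np.vander(X[:, n], 3)                   # columns: x^2, x, 1
    coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    Y = basis @ coef                                # regressed values of Y_n on each path
Y0 = Y.mean()                                       # at n = 0 all paths start at X_0 = 0

print("numerical Y_0:", Y0, " exact Y_0:", T)
```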

20.
In this paper, we use the solutions of forward-backward stochastic differential equations to obtain the optimal control for the backward stochastic linear-quadratic optimal control problem. We also give a linear feedback regulator for this optimal control problem by using the solutions of a family of Riccati equations.
