Similar Articles
20 similar articles found.
1.
This paper discusses a linear quadratic optimal control problem in which the stochastic system is a linear stochastic differential equation, driven by a Lévy process, with random coefficients and an affine term. The adjoint equation has unbounded coefficients, so its solvability is not obvious. Using the theory of $\mathscr{B}\mathscr{M}\mathscr{O}$ martingales, the existence and uniqueness of the solution of the adjoint equation on a finite time horizon is proved. Under a stability condition, the existence of solutions of the backward stochastic Riccati differential equation and of the adjoint backward stochastic equation on the infinite time horizon is obtained by approximating with the solutions of the corresponding finite-horizon equations. These solutions can be used to synthesize the optimal control.
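For orientation only (the notation below is illustrative and not taken from the paper): a Lévy-driven linear quadratic problem with random coefficients and an affine term is typically posed as
$$ dX_t=(A_tX_t+B_tu_t+b_t)\,dt+(C_tX_t+D_tu_t+\sigma_t)\,dW_t+\int_{\mathbb{R}_0}\big(E_t(z)X_{t-}+F_t(z)u_t+g_t(z)\big)\,\tilde N(dt,dz), $$
$$ J(u)=\mathbb{E}\Big[\langle GX_T,X_T\rangle+\int_0^T\big(\langle Q_tX_t,X_t\rangle+\langle R_tu_t,u_t\rangle\big)\,dt\Big], $$
where $W$ is the Brownian part and $\tilde N$ the compensated jump measure of the Lévy process; the affine terms $b,\sigma,g$ are what give rise to the additional adjoint backward equation mentioned above.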

2.
In this paper we study the existence of the optimal (minimizing) control for a tracking problem, as well as a quadratic cost problem subject to linear stochastic evolution equations with unbounded coefficients in the drift. The backward differential Riccati equation (BDRE) associated with these problems (see [2], for finite dimensional stochastic equations or [21], for infinite dimensional equations with bounded coefficients) is in general different from the conventional BDRE (see [10], [18]). Under stabilizability and uniform observability conditions and assuming that the control weight-costs are uniformly positive, we establish that BDRE has a unique, uniformly positive, bounded on ℝ+ and stabilizing solution. Using this result we find the optimal control and the optimal cost. It is known [18] that uniform observability does not imply detectability and consequently our results are different from those obtained under detectability conditions (see [10]).
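As a point of comparison (a standard textbook form, not the equation derived in the paper), the conventional backward differential Riccati equation for a finite-dimensional stochastic LQ problem with bounded coefficients reads
$$ \dot P(t)+A(t)^{*}P(t)+P(t)A(t)+C(t)^{*}P(t)C(t)+Q(t)-\big(P(t)B(t)+C(t)^{*}P(t)D(t)\big)\big(R(t)+D(t)^{*}P(t)D(t)\big)^{-1}\big(B(t)^{*}P(t)+D(t)^{*}P(t)C(t)\big)=0, $$
the point of the abstract being that with unbounded drift coefficients the associated BDRE differs in general from this conventional form.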

3.
§1. Introduction. Let $(\Omega,\mathcal{F},P,\{\mathcal{F}_t\}_{t\ge0})$ be a complete filtered probability space on which a standard one-dimensional Brownian motion $w(\cdot)$ is defined such that $\{\mathcal{F}_t\}_{t\ge0}$ is the natural filtration generated by $w(\cdot)$, augmented by all the $P$-null sets in $\mathcal{F}$. We consider the following state equation, where $\tau\in\mathcal{T}[0,T]$, the set of all $\{\mathcal{F}_t\}_{t\ge0}$-stopping times taking values in $[0,T]$, $\xi\in L^2_{\mathcal{F}_\tau}(\Omega;\mathbb{R}^n)$; $A,B,C,D$ are matrix-valued $\{\mathcal{F}_t\}_{t\ge0}$-adapted bounded processes. In the above, $u(\cdot)\in\mathcal{U}[\tau,T]$ …
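The state equation referred to in this excerpt is cut off. As an assumption-labelled sketch only (not the paper's own display), LQ problems with stopping-time initial data of this kind are usually driven by a controlled linear SDE of the form
$$ dX_s=\big(A_sX_s+B_su_s\big)\,ds+\big(C_sX_s+D_su_s\big)\,dw_s,\qquad s\in[\tau,T],\quad X_\tau=\xi. $$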

4.
Stochastic Linear Quadratic Optimal Control Problems
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving the Riccati equation are discussed as well. Accepted 15 May 2000. Online publication 1 December 2000.
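For illustration (a generic form, not necessarily the paper's, which in particular allows an indefinite control weight $R$): with random coefficients the Riccati equation becomes a backward SDE for a pair $(P,\Lambda)$,
$$ dP_t=-\Big(P_tA_t+A_t^{\top}P_t+C_t^{\top}P_tC_t+C_t^{\top}\Lambda_t+\Lambda_tC_t+Q_t-(P_tB_t+C_t^{\top}P_tD_t+\Lambda_tD_t)(R_t+D_t^{\top}P_tD_t)^{-1}(B_t^{\top}P_t+D_t^{\top}P_tC_t+D_t^{\top}\Lambda_t)\Big)dt+\Lambda_t\,dW_t,\qquad P_T=G, $$
where the inverse must be interpreted with care when $R$ is allowed to be negative.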

5.
Backward stochastic Riccati equations are motivated by the solution of general linear quadratic optimal stochastic control problems with random coefficients, and the solution has been open in the general case. One distinguishing difficult feature is that the drift contains a quadratic term of the second unknown variable. In this paper, we obtain the global existence and uniqueness result for a general one-dimensional backward stochastic Riccati equation. This solves the one-dimensional case of Bismut–Peng's problem which was initially proposed by Bismut (Lecture Notes in Math. 649 (1978) 180). We use an approximation technique by constructing a sequence of monotone drifts and then passing to the limit. We make full use of the special structure of the underlying Riccati equation. The singular case is also discussed. Finally, the above results are applied to solve the mean–variance hedging problem with general random market conditions.
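A hedged sketch of the kind of one-dimensional equation in question (the coefficients $a,b,c,d,q,r$ and the terminal value $g$ below are generic placeholders, not the paper's notation):
$$ dK_t=-\Big[\big(2a_t+c_t^2\big)K_t+2c_tL_t+q_t-\frac{\big(b_tK_t+c_td_tK_t+d_tL_t\big)^2}{r_t+d_t^2K_t}\Big]dt+L_t\,dW_t,\qquad K_T=g, $$
whose drift indeed contains a term $d_t^2L_t^2/(r_t+d_t^2K_t)$ that is quadratic in the second unknown $L$.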

6.
In this article, we consider a linear-quadratic optimal control problem (LQ problem) for a controlled linear stochastic differential equation driven by a multidimensional Brownian motion and a Poisson random martingale measure in the general case, where the coefficients are allowed to be predictable processes or random matrices. By the duality technique, the dual characterization of the optimal control is derived via the optimality system (the so-called stochastic Hamilton system), which turns out to be a linear fully coupled forward-backward stochastic differential equation with jumps. Using a decoupling technique, the connection between the stochastic Hamilton system and the associated Riccati equation is established. As a result, the state feedback representation is obtained for the optimal control. As the coefficients of the LQ problem are random, the associated Riccati equation here is a highly nonlinear backward stochastic differential equation (BSDE) with jumps, whose generator depends on the unknown variables K, L, and H in a quadratic way (see (5.9) herein). For the case where the generator is bounded and depends linearly on the unknown martingale terms L and H, the existence and uniqueness of the solution of the associated Riccati equation are established by Bellman's principle of quasi-linearization.
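Schematically (the precise generator is the paper's equation (5.9) and is not reproduced here), the associated Riccati equation is a BSDE with jumps for a triple $(K,L,H)$ of the shape
$$ dK_t=-F\big(t,K_t,L_t,H_t\big)\,dt+L_t\,dW_t+\int_{\mathbb{R}_0}H_t(z)\,\tilde N(dt,dz),\qquad K_T=G, $$
with a generator $F$ that is quadratic in the unknowns; the solvable case treated in the abstract is the one where $F$ is bounded and depends only linearly on the martingale terms $L$ and $H$.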

7.
We study the linear quadratic optimal stochastic control problem jointly driven by Brownian motion and Lévy processes. Applying the section theorem, we prove that the new affine stochastic adjoint differential equation admits an inverse process. Applying Bellman's principle of quasilinearization and a monotone iterative convergence method, we prove the existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that the optimal feedback control exists and that the value function is composed of the initial value of the solution of the related backward Riccati differential equation and of the related adjoint equation.
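To make the quasilinearization idea concrete, here is a small self-contained sketch. It is only an illustration of the iteration, not of the paper's method: the data are made up and the equation is deterministic. Each sweep replaces the quadratic term of a scalar Riccati ODE by its linearization around the previous iterate and integrates the resulting linear equation backward from the terminal condition.

```python
# Illustration only: Bellman quasilinearization for a scalar Riccati ODE
#   P'(t) = -(2*a*P + q - (b**2 / r) * P**2),  P(T) = g,
# the deterministic counterpart of the backward Riccati equations above.
# Each sweep linearizes P**2 as 2*P_prev*P - P_prev**2 around the previous
# iterate, so only a linear ODE is integrated per sweep; the iterates
# converge to the Riccati solution (monotonically under the classical
# quasilinearization assumptions).  All constants below are hypothetical.

import numpy as np

a, b, q, r, g = -1.0, 1.0, 1.0, 1.0, 0.5
T, N = 1.0, 2000
dt = T / N
ts = np.linspace(0.0, T, N + 1)

def backward_sweep(P_prev):
    """One quasilinearization sweep: explicit Euler, integrated from T down to 0."""
    P = np.empty_like(ts)
    P[-1] = g
    for i in range(N, 0, -1):
        Pp = P_prev[i]
        # linearized right-hand side of P' = -(2aP + q - (b^2/r) P^2)
        rhs = -(2.0 * a * P[i] + q - (b**2 / r) * (2.0 * Pp * P[i] - Pp**2))
        P[i - 1] = P[i] - dt * rhs
    return P

P = np.zeros_like(ts)            # start the iteration from the zero function
for _ in range(20):
    P_new = backward_sweep(P)
    if np.max(np.abs(P_new - P)) < 1e-10:
        P = P_new
        break
    P = P_new

print("approximate value P(0) =", P[0])
```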

8.
We study the linear quadratic optimal stochastic control problem jointly driven by Brownian motion and Lévy processes. Applying the section theorem, we prove that the new affine stochastic adjoint differential equation admits an inverse process. Applying Bellman's principle of quasilinearization and a monotone iterative convergence method, we prove the existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that the optimal feedback control exists and that the value function is composed of the initial value of the solution of the related backward Riccati differential equation and of the related adjoint equation.

9.
We consider an average quadratic cost criterion for affine stochastic differential equations with almost-periodic coefficients. Under stabilizability and detectability conditions we show that the Riccati equation associated with the quadratic control problem has a unique almost-periodic solution. In the periodic case the corresponding result is proved in [4].
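For concreteness (generic notation, not the paper's): an average quadratic cost criterion for an affine stochastic differential equation with almost-periodic coefficients is of the type
$$ J(u)=\limsup_{T\to\infty}\frac{1}{T}\,\mathbb{E}\int_0^T\big(\langle Q(t)x_t,x_t\rangle+\langle R(t)u_t,u_t\rangle\big)\,dt, $$
and the almost-periodicity of the coefficients is what makes an almost-periodic Riccati solution the natural object to look for.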

10.
We consider the infinite horizon quadratic cost minimization problem for a stable time-invariant well-posed linear system in the sense of Salamon and Weiss, and show that it can be reduced to a spectral factorization problem in the control space. More precisely, we show that the optimal solution of the quadratic cost minimization problem is of static state feedback type if and only if a certain spectral factorization problem has a solution. If both the system and the spectral factor are regular, then the feedback operator can be expressed in terms of the Riccati operator, and the Riccati operator is a positive self-adjoint solution of an algebraic Riccati equation. This Riccati equation is similar to the usual algebraic Riccati equation, but one of its coefficients varies depending on the subspace in which the equation is posed. Similar results are true for unstable systems, as we have proved elsewhere.
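For comparison, the "usual" algebraic Riccati equation alluded to, in its standard finite-dimensional bounded-operator form (the abstract's version differs as described), is
$$ A^{*}\Pi+\Pi A-\Pi BR^{-1}B^{*}\Pi+C^{*}C=0, $$
with the optimal static state feedback $u=-R^{-1}B^{*}\Pi x$.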



11.
Mihai Popescu 《PAMM》2008,8(1):10899-10900
This study concerns the minimization of quadratic functionals over an infinite time horizon. The coefficients of the quadratic form are square matrices that are functions of the state variable. The dynamic constraints are represented by a bilinear differential system. The necessary extremum conditions determine the adjoint variables λ and the control variables u as functions of the state variable, together with the adjoint system corresponding to those functions. One thus obtains a matrix differential equation whose solution, the positive definite symmetric matrix P(x), satisfies the algebraic Riccati equation. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
The treatment of the stochastic linear quadratic optimal control problem with finite time horizon requires the solution of stochastic differential Riccati equations. We propose efficient numerical methods, which exploit the particular structure and can be applied for large-scale systems. They are based on numerical methods for ordinary differential equations such as Rosenbrock methods, backward differentiation formulas, and splitting methods. The performance of our approach is tested in numerical experiments.
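As a rough illustration of the ODE-solver viewpoint (a deterministic stand-in with made-up data; the paper's equations are stochastic and large-scale, and none of its code is reproduced here), one can hand a vectorized matrix Riccati ODE to a stiff BDF integrator and integrate it backward from the terminal condition:

```python
# Illustration only: a small deterministic matrix Riccati ODE integrated
# backward in time with a stiff BDF scheme, as a stand-in for the
# backward-differentiation-formula approach mentioned above.  All data are
# hypothetical; stochastic terms and large-scale structure are not modelled.

import numpy as np
from scipy.integrate import solve_ivp

n = 4
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
Q = np.eye(n)
R = np.eye(1)
G = np.eye(n)            # terminal condition P(T) = G
T = 1.0

def riccati_rhs(t, p_flat):
    """dP/dt = -(A'P + PA - P B R^{-1} B' P + Q), flattened for solve_ivp."""
    P = p_flat.reshape(n, n)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# solve_ivp accepts a decreasing time span, i.e. integration from T down to 0
sol = solve_ivp(riccati_rhs, (T, 0.0), G.ravel(), method="BDF",
                rtol=1e-8, atol=1e-10)
P0 = sol.y[:, -1].reshape(n, n)
print("asymmetry of the computed P(0):", np.max(np.abs(P0 - P0.T)))
```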

13.
This paper studies the linear quadratic control problem for stochastic differential equations of Itô-Poisson type. Using the dynamic programming method, Itô's formula and related techniques, and by solving the HJB equation, we obtain the stochastic Riccati equation and two further differential equations, derive the control variable, and thus solve the linear quadratic optimal control problem.
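A hedged reading of the dynamic-programming step (the ansatz below is a standard one and is assumed, not quoted from the paper): one looks for a value function of the form
$$ V(t,x)=\langle P(t)x,x\rangle+\langle\varphi(t),x\rangle+\psi(t), $$
substitutes it into the HJB (integro-differential) equation, and matches the quadratic, linear and constant terms in $x$. This is what produces a Riccati equation for $P$ together with two additional differential equations (for $\varphi$ and $\psi$), while minimizing over $u$ yields the control variable.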

14.
In this paper, a large class of time-varying Riccati equations arising in stochastic dynamic games is considered. The problem of the existence and uniqueness of a globally defined solution, namely the bounded and stabilizing solution, is investigated. As an application of the obtained existence results, we address in a second step the infinite-horizon zero-sum two-player linear quadratic (LQ) dynamic game for a stochastic discrete-time dynamical system subject to both random switching of its coefficients and multiplicative noise. We show that in the solution of such an optimal control problem, a crucial role is played by the unique bounded and stabilizing solution of the considered class of generalized Riccati equations.

15.
In this paper, we consider a linear–quadratic stochastic two-person nonzero-sum differential game. Open-loop and closed-loop Nash equilibria are introduced. The existence of the former is characterized by the solvability of a system of forward–backward stochastic differential equations, and that of the latter is characterized by the solvability of a system of coupled symmetric Riccati differential equations. Sometimes, open-loop Nash equilibria admit a closed-loop representation, via the solution to a system of non-symmetric Riccati equations, which could be different from the outcome of the closed-loop Nash equilibria in general. However, it is found that for the case of zero-sum differential games, the Riccati equation system for the closed-loop representation of an open-loop saddle point coincides with that for the closed-loop saddle point, which leads to the conclusion that the closed-loop representation of an open-loop saddle point is the outcome of the corresponding closed-loop saddle point as long as both exist. In particular, for linear–quadratic optimal control problem, the closed-loop representation of an open-loop optimal control coincides with the outcome of the corresponding closed-loop optimal strategy, provided both exist.

16.
The infinite dimensional version of the linear quadratic cost control problem was studied by Curtain and Pritchard [2] and Gibson [5], using Riccati integral equations instead of differential equations. In the present paper the corresponding stochastic case over a finite horizon is considered. The stochastic perturbations are given by Hilbert-valued square integrable martingales, and it is shown that the deterministic optimal feedback control is also optimal in the stochastic case. Sufficient conditions are given for the convergence of approximate solutions of optimal control problems.

17.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward-backward stochastic differential equation (MFFBSDE, for short) and the admissible control is of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland's variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle which characterizes the necessary condition for the optimal control. We also study a stochastic linear quadratic control problem with state constraints.

18.
This paper studies the quadratic cost control problem, over an infinite time interval, for systems defined by integral equations given in terms of semigroups. Conditions are imposed which allow unbounded control action to be considered. It is shown that the solution to the problem leads to an integral Riccati equation with a unique solution. The integral Riccati equation may be differentiated, and conditions are given under which the differential Riccati equation also has a unique solution.

19.
We give an existence and uniqueness result for the solutions of a class of forward-backward stochastic differential equations. Applying this result, we study the design of a new class of generalized stochastic linear quadratic optimal controllers and obtain the explicit structure of the unique optimal controller, expressed in terms of the solution of the forward-backward stochastic differential equation. On the basis of a generalized system of Riccati equations, the exact linear feedback form of the optimal controller is obtained. Finally, a design algorithm for the stochastic linear quadratic optimal controller is given.
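For orientation (generic stochastic LQ notation, assumed rather than quoted from the paper): the exact linear feedback form of such an optimal controller typically reads
$$ u_t=-\big(R_t+D_t^{\top}K_tD_t\big)^{-1}\big(B_t^{\top}K_t+D_t^{\top}K_tC_t\big)X_t, $$
with $K$ solving the associated (generalized) Riccati equation system.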

20.
This paper is devoted to forward-backward systems of stochastic differential equations in which the forward equation is not coupled to the backward one, both equations are infinite dimensional and on the time interval [0, +∞). The forward equation defines an Ornstein-Uhlenbeck process, the driver of the backward equation has a linear part which is the generator of a strongly continuous, dissipative, compact semigroup, and a nonlinear part which is assumed to be continuous with linear growth. Under the assumption of equivalence of the laws of the solution to the forward equation, we prove the existence of a solution to the backward equation. We apply our results to a stochastic game problem with infinitely many players.
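For reference (standard notation, not the paper's): an infinite-dimensional Ornstein-Uhlenbeck forward equation is of the form
$$ dX_t=AX_t\,dt+G\,dW_t,\qquad X_0=x, $$
with mild solution $X_t=e^{tA}x+\int_0^t e^{(t-s)A}G\,dW_s$; the equivalence-of-laws assumption in the abstract concerns the laws of this forward solution.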

