Similar Documents
1.
In this paper, the canonical dual function (Gao, 2004 [4]) is used to solve a global optimization problem. We find global minimizers by means of backward differential flows, where the backward flow is generated by the local solution of an initial value problem for an ordinary differential equation. Some examples and applications are presented.
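A minimal numerical sketch of the idea, not the paper's algorithm: assume the backward flow is the solution branch of ∇f(x) + ρx = 0, traced by the ODE dx/dρ = −(∇²f(x) + ρ)⁻¹x from a large parameter ρ down to ρ = 0, where it lands on a critical point of f. The objective below is a made-up one-dimensional tilted double well chosen only for illustration.

```python
# Sketch (illustrative assumptions, not the paper's method): trace the branch of
# grad f(x) + rho*x = 0 backward in rho; at rho = 0 the endpoint is a critical
# point of f, here the global minimizer of the tilted double well.
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):   # f(x) = x**4/4 - x**2/2 - 0.3*x
    return x**3 - x - 0.3

def hess_f(x):
    return 3.0 * x**2 - 1.0

def backward_flow(rho, x):
    # Differentiating grad f(x(rho)) + rho*x(rho) = 0 with respect to rho gives
    # dx/drho = -x / (f''(x) + rho), valid while f''(x) + rho > 0.
    return [-x[0] / (hess_f(x[0]) + rho)]

# Start on the branch at a large rho (Newton's method on grad f(x) + rho0*x = 0).
rho0, x0 = 10.0, 0.0
for _ in range(20):
    x0 -= (grad_f(x0) + rho0 * x0) / (hess_f(x0) + rho0)

sol = solve_ivp(backward_flow, (rho0, 0.0), [x0], rtol=1e-8, atol=1e-10)
x_star = sol.y[0, -1]
print("flow endpoint x* =", x_star)         # about 1.125 for this objective
print("grad f(x*)       =", grad_f(x_star)) # approximately 0
```

For this particular objective the endpoint of the flow is the global minimizer rather than the local one near x ≈ −0.79, which is the behavior the canonical-duality argument is meant to guarantee under its assumptions.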

2.
We study the Pontryagin maximum principle for an optimal control problem with state constraints. We analyze the continuity of a vector function µ (which is one of the Lagrange multipliers corresponding to an extremal by virtue of the maximum principle) at the points where the extremal trajectory meets the boundary of the set given by the state constraints. We obtain sufficient conditions for the continuity of µ in terms of the smoothness of the extremal trajectory.

3.
We study the linear-quadratic optimal stochastic control problem driven jointly by Brownian motion and Lévy processes. By applying the section theorem, we prove that the new affine stochastic differential adjoint equation admits an inverse process. Applying Bellman's principle of quasilinearization and a monotone iterative convergence method, we prove existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that an optimal feedback control exists and that the value function is expressed through the initial values of the solutions of the associated backward Riccati differential equation and adjoint equation.
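For orientation only, the sketch below integrates the classical deterministic backward matrix Riccati equation −dP/dt = AᵀP + PA − PBR⁻¹BᵀP + Q with P(T) = Q_T; the Lévy-driven equation in the abstract carries additional jump and adjoint terms that are not reproduced here. The matrices are arbitrary illustrative data.

```python
# Classical backward Riccati ODE for a deterministic LQ problem (baseline only,
# not the stochastic Levy-driven Riccati equation of the paper).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
QT = np.eye(2)
T = 5.0

def riccati_rhs(t, p_flat):
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Integrate backward from the terminal time T to 0.
sol = solve_ivp(riccati_rhs, (T, 0.0), QT.ravel(), rtol=1e-8)
P0 = sol.y[:, -1].reshape(2, 2)
K0 = np.linalg.solve(R, B.T @ P0)   # optimal feedback gain u = -K0 x at t = 0
print("P(0) =\n", P0)
print("feedback gain K(0) =", K0)
```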

5.
We formulate an extremal problem of constructing a trajectory of a moving object that is farthest from a group of observers with fixed visibility cones. Under some constraints on the arrangement of the observers, we give a characterization and a method of construction of an optimal trajectory.

6.
We address a general optimal switching problem over a finite horizon for a stochastic system described by a differential equation driven by Brownian motion. The main novelty is that we allow for infinitely many modes (or regimes, i.e. the possible values of the piecewise-constant control process). We allow all the given coefficients in the model to be path-dependent, that is, their value at any time depends on the past trajectory of the controlled system. The main aim is to introduce a suitable (scalar) backward stochastic differential equation (BSDE), with a constraint on the martingale part, that gives a probabilistic representation of the value function of the problem. This is achieved by randomization of control, i.e. by introducing an auxiliary optimization problem which has the same value as the original optimal switching problem and for which the desired BSDE representation is obtained. In contrast with the existing literature, we rely neither on a system of reflected BSDEs nor on the associated Hamilton–Jacobi–Bellman equation, which is unavailable in our non-Markovian framework.

7.
We generalize the quadratic form in the control variable appearing in the cost functional of the classical LQ problem to a class of even-degree polynomials, and prove that an equivalent extended approximation of this generalized unconstrained LQ optimal control problem can be realized by a sequence of ball-constrained optimal control problems with increasing radii. Using the Pontryagin maximum principle, we then set up the quadratic program associated with each ball-constrained optimal control problem; by means of the canonical backward differential flow and a fixed-point theorem, we solve the resulting boundary value problem for ordinary differential equations and obtain the optimal value of the ball-constrained problem. As the radius of the constraint ball tends to infinity, these solutions form a minimizing sequence for the original generalized LQ optimal control problem, yielding the optimal value of the original problem.

8.
A new, generalized and strengthened form of an assertion about the extremum of a linear-fractional integral functional defined on a set of probability measures is presented. It is shown that the solution of the extremal problem for such a functional is completely determined by the extremal properties of the so-called test function, namely the ratio of the integrands of the numerator and the denominator. On the basis of this assertion, a theorem on an optimal strategy for controlling a semi-Markov process with a finite set of states is proved. In particular, it is established that if the test function of the objective functional of a control problem attains a global extremum, then an optimal control strategy exists, is deterministic, and is determined by the point of global extremum. Corresponding assertions are also obtained for the case where the test function does not attain its global extremum.
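A small numeric illustration of the stated principle on a made-up finite state space (my own example, not taken from the paper): the ratio ∫f dμ / ∫g dμ over probability measures is maximized by the Dirac measure placed at the maximizer of the test function f/g, assuming g > 0.

```python
# Check numerically that the linear-fractional functional over probability
# vectors is maximized by a Dirac measure at the argmax of the test function f/g.
import numpy as np

rng = np.random.default_rng(0)
n = 6
f = rng.normal(size=n)
g = rng.uniform(0.5, 2.0, size=n)      # strictly positive denominator integrand

best_point = np.argmax(f / g)          # maximizer of the test function
dirac_value = f[best_point] / g[best_point]

# Compare against many randomly drawn probability vectors.
random_values = []
for _ in range(10000):
    mu = rng.dirichlet(np.ones(n))
    random_values.append(mu @ f / (mu @ g))

print("value at Dirac measure:         ", dirac_value)
print("best value over random measures:", max(random_values))  # stays below the Dirac value
```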

9.
This paper examines the value function of a partial hedging problem under model ambiguity. The study is based on a dual representation of the value function obtained by the authors. We formulate a family of control problems, whose value processes are characterized as solutions of a backward stochastic differential equation, and give a sufficient condition to identify optimal controls.

10.
The Ritt problem asks whether there is an algorithm that decides if one prime differential ideal is contained in another when both are given by their characteristic sets. We give several equivalent formulations of this problem. In particular, we show that it is equivalent to testing whether a differential polynomial is a zero divisor modulo a radical differential ideal. The technique used in the proof of this equivalence yields algorithms for computing a canonical decomposition of a radical differential ideal into prime components and a canonical generating set of a radical differential ideal. Both proposed representations of a radical differential ideal are independent of the given set of generators and can be made independent of the ranking.

11.
We consider a relaxed optimal control problem for systems defined by nonlinear parabolic partial differential equations with distributed control. The problem is completely discretized by using a finite-element approximation scheme with piecewise linear states and piecewise constant controls. Existence of optimal controls and necessary conditions for optimality are derived for both the continuous and the discrete problem. We then prove that accumulation points of sequences of discrete optimal [resp. extremal] controls are optimal [resp. extremal] for the continuous problem.

12.
We prove a duality theorem for the stochastic optimal control problem with a convex cost function and show that the minimizer satisfies a class of forward–backward stochastic differential equations. As an application of the duality theorem, we give an approach to h-path processes for diffusion processes.

13.
This paper presents some applications of the canonical dual theory to optimal control problems. The analytic solutions of several nonlinear and nonconvex problems are investigated by global optimization. It turns out that the backward differential flow defined by the KKT equation may reach the globally optimal solution. The analytic solution of an optimal control problem is obtained via the expression of the co-state. Several illustrative examples are given.

14.
We consider an optimal control problem for a nonconvex control system under state constraints and the associated value function, which in general is not differentiable. We provide some characterizations of optimal trajectories using contingent derivatives. To this end, we derive a costate satisfying the adjoint equation, the maximum principle, and a transversality condition linked to the superdifferential of the value function.

15.
Analogs of certain conjugate point properties in the calculus of variations are developed for optimal control problems. The main result in this direction concerns the characterization of a parameterized family of extremals passing through the first backward conjugate point t_c. A corollary of this result is that, for the linear quadratic problem (LQP), there exists at least a one-parameter family of extremals going through the conjugate point that gives the same cost as the candidate extremal; i.e., the extremal control is optimal but nonunique on [t_c, t_f]. An analysis of the effect on the conjugate point of employing penalty functions for terminal equality constraints in the LQP is also presented. It is shown that the sequence of approximate conjugate points is always conservative and converges to the conjugate point of the constrained problem. Furthermore, it is proved that the addition of terminal constraints causes the conjugate point to move backward (or remain the same).

16.
For a zero-sum differential game, we consider an algorithm for constructing optimal control strategies by means of backward minimax constructions. The dynamics of the game is not necessarily linear, the players' controls satisfy geometric constraints, and the terminal payoff function is Lipschitz continuous and compactly supported. The game value function is computed by multilinear interpolation of grid functions. We show that the algorithm error can be made arbitrarily small if the time discretization step is sufficiently small and the state-space discretization step is of a higher order of smallness than the time step. We also show that the algorithm can be used for differential games with a terminal set. We present the results of computations for a problem of conflict control of a nonlinear pendulum.
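A toy one-dimensional version of such a backward construction (my own example, not the pendulum computations reported in the paper): the value of a game with dynamics ẋ = u + v, |u| ≤ 1, |v| ≤ 0.5, and a Lipschitz, compactly supported terminal payoff is computed backward on a grid, with linear interpolation in space playing the role of the multilinear interpolation used in higher dimensions.

```python
# Backward minimax construction on a 1-D grid for a toy zero-sum game.
import numpy as np

xs = np.linspace(-3.0, 3.0, 601)          # spatial grid
dt = 0.005                                 # time step (space step finer in the right sense)
n_steps = 200                              # horizon T = 1.0
U = np.array([-1.0, 0.0, 1.0])            # minimizing player's controls
V = np.array([-0.5, 0.0, 0.5])            # maximizing player's controls

value = np.maximum(0.0, 1.0 - np.abs(xs)) # Lipschitz, compactly supported terminal payoff

for _ in range(n_steps):
    # Evaluate the interpolated value at every successor state for each control pair.
    candidates = np.empty((len(U), len(V), len(xs)))
    for i, u in enumerate(U):
        for j, v in enumerate(V):
            candidates[i, j] = np.interp(xs + dt * (u + v), xs, value)
    value = candidates.max(axis=1).min(axis=0)   # min over u of max over v

ix0 = np.argmin(np.abs(xs))
print("approximate game value at x = 0:", value[ix0])
```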

17.
This paper is concerned with Kalman-Bucy filtering problems for a forward-backward stochastic system, namely the Hamiltonian system arising from a stochastic optimal control problem. There are two main contributions worth pointing out. One is that we obtain the Kalman-Bucy filtering equation of a forward-backward stochastic system and study a kind of stability of this filtering equation. The other is that we develop a backward separation technique, different from Wonham's separation theorem, to study a partially observed recursive optimal control problem. The new technique also covers more general situations; for instance, it is used to solve a partially observed linear-quadratic non-zero-sum differential game problem. We also give a simple formula to estimate the value of information, defined as the difference between the optimal cost functionals in the partially observed and fully observed cases.
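As a baseline only, the classical scalar Kalman-Bucy filter (not the forward-backward Hamiltonian system treated in the paper) couples an estimate equation with a Riccati equation for the error variance; the sketch below simulates it with Euler-Maruyama on made-up parameters.

```python
# Classical scalar Kalman-Bucy filter, simulated with Euler-Maruyama:
#   d x_hat = a*x_hat dt + k(t) (dy - c*x_hat dt),   k(t) = Sigma(t)*c / r,
#   dSigma/dt = 2a*Sigma + q - (c*Sigma)**2 / r.
import numpy as np

rng = np.random.default_rng(1)
a, c, q, r = -0.5, 1.0, 0.2, 0.05        # illustrative model parameters
dt, n = 1e-3, 5000

x, x_hat, Sigma = 1.0, 0.0, 1.0          # true state, filter estimate, error variance
errors = []
for _ in range(n):
    dw = np.sqrt(dt) * rng.normal()      # state noise increment
    dv = np.sqrt(dt) * rng.normal()      # observation noise increment
    x += a * x * dt + np.sqrt(q) * dw
    dy = c * x * dt + np.sqrt(r) * dv    # observation increment
    k = Sigma * c / r                    # Kalman gain
    x_hat += a * x_hat * dt + k * (dy - c * x_hat * dt)
    Sigma += (2 * a * Sigma + q - (c * Sigma) ** 2 / r) * dt
    errors.append(x - x_hat)

print("empirical RMS estimation error:", np.sqrt(np.mean(np.square(errors[-1000:]))))
print("filter error variance Sigma(T):", Sigma)
```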

18.
The paper describes a continuous second-variation method for solving optimal control problems with terminal constraints where the control is defined on a closed set. The integration of matrix differential equations based on a second-order expansion of a Lagrangian provides linear updates of the control and a locally optimal feedback controller. The process involves a backward and a forward integration stage, which require storing trajectories. A method has been devised to store continuous solutions of ordinary differential equations and to compute accurately the continuous expansion of the Lagrangian around a nominal trajectory. Thanks to the continuous approach, the method implicitly adapts the numerical time mesh and provides precise gradient iterates for finding an optimal control. The method represents an extension to the continuous case of discrete second-order techniques of optimal control. The novel method is demonstrated on bang–bang optimal control problems, showing its ability to identify automatically the optimal switching points in the control without insight into the switching structure or a particular choice of the time mesh. A complex space trajectory problem is tackled to demonstrate the numerical robustness of the method on problems with different time scales.

19.
We consider the control problem of minimizing the guaranteed result for a system described by an ordinary differential equation in the presence of uncontrolled noise. The concepts and problem formulation of /1/ are used. It is shown that, when forming the optimal control by the method of programmed stochastic synthesis /1–3/, the extremal shift to the accompanying point /1, 4/ can be reduced to an extremal shift against the gradient of an appropriate function. This explains the connection between programmed stochastic synthesis and the generalized Hamilton-Jacobi equation /5, 6/ in the theory of differential games.

20.
In this article, we establish the existence and uniqueness of the solution for a class of generalized reflected backward stochastic differential equations involving an integral with respect to a continuous process (the local time of the diffusion on the boundary), using the penalization method. We also give a characterization of the solution as the value function of an optimal stopping problem. We then give a probabilistic formula for the viscosity solution of an obstacle problem for PDEs with a nonlinear Neumann boundary condition.
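The penalization idea itself can be illustrated on a much simpler object than the generalized reflected BSDE of the paper: a Brownian path reflected at the origin is approximated by adding a drift n·max(0, −X) that pushes the path back into the half-line, and the approximation improves as the penalty parameter n grows. The sketch below (my own toy example) compares the penalized paths with the exact Skorokhod reflection on a common Brownian path.

```python
# Penalization of reflection at 0 for a Brownian path (toy forward example,
# not the reflected BSDE of the paper):  dX^n = dW + n * max(0, -X^n) dt.
import numpy as np

rng = np.random.default_rng(7)
x0, dt, n_steps = 0.5, 1e-4, 20000
dw = np.sqrt(dt) * rng.normal(size=n_steps)
w = np.concatenate(([0.0], np.cumsum(dw)))

# Exact Skorokhod reflection of x0 + W at the boundary 0.
path = x0 + w
reflected = path - np.minimum(0.0, np.minimum.accumulate(path))

for n in (10, 100, 1000):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + dw[k] + n * max(0.0, -x[k]) * dt
    print(f"penalty n = {n:5d}, max deviation from reflected path: "
          f"{np.max(np.abs(x - reflected)):.4f}")
```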
