Similar Documents
A total of 20 similar documents were found.
1.
In this paper we study an optimal control problem, where states of a control system are described by impulsive differential equations with nonlocal boundary conditions. With the help of the contraction principle we prove the existence and uniqueness of a solution to the corresponding boundary value problem with fixed admissible controls. We calculate the first and second variation of the functional. Using the variation of controls, we establish various necessary optimality conditions of the second order.
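A minimal sketch of the kind of system meant here, in assumed notation (the paper's exact form may differ): the state satisfies an impulsive equation with a nonlocal two-point boundary condition,
\[
\dot x(t) = f\bigl(t, x(t), u(t)\bigr), \quad t \in [0,T]\setminus\{t_1,\dots,t_p\}, \qquad
\Delta x(t_k) = I_k\bigl(x(t_k)\bigr), \qquad A\,x(0) + B\,x(T) = c .
\]
For a fixed admissible control u, the boundary value problem is rewritten as a fixed-point equation x = \Phi(x) for an equivalent integral operator, and under Lipschitz conditions on f and I_k (and invertibility of the boundary operator) \Phi is a contraction, which yields the existence and uniqueness claimed in the abstract.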

2.
We derive second-order sufficient optimality conditions for discontinuous controls in optimal control problems of ordinary differential equations with initial-final state constraints and mixed state-control constraints of equality and inequality type. Under the assumption that the gradients with respect to the control of active mixed constraints are linearly independent, the sufficient conditions imply a bounded strong minimum in the problem.
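For orientation, a generic problem of the class treated in this and several later abstracts, in assumed notation, is
\[
\min\; \varphi\bigl(x(0), x(T)\bigr) \quad \text{s.t.} \quad \dot x = f(t, x, u), \qquad
g(t, x, u) = 0, \quad h(t, x, u) \le 0, \qquad K\bigl(x(0), x(T)\bigr) = 0, \quad L\bigl(x(0), x(T)\bigr) \le 0 ,
\]
where g, h are the mixed state-control constraints and K, L the initial-final state constraints; the linear-independence assumption refers to the control gradients g_u, h_u of the mixed constraints that are active along the reference trajectory.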

3.

In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of the Ito type. We study the relaxed model for which an optimal solution exists. This is an extension of the initial control problem, where admissible controls are measure valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.
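As a hedged illustration of the relaxed model (formulations vary in the literature): a relaxed control is a process q_t(da) of probability measures on the control set U, an ordinary control u_t corresponding to the Dirac measure \delta_{u_t}, and in the simplest formulation the Ito state equation is averaged against it,
\[
dx_t = \int_U b(t, x_t, a)\, q_t(da)\, dt + \int_U \sigma(t, x_t, a)\, q_t(da)\, dW_t .
\]
The diffusion term can also be relaxed through martingale measures, so this display is only meant to convey the idea of measure-valued controls, not the paper's exact state equation.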

4.
We study an optimal control problem in which the plant state is described by impulsive differential equations with nonlocal boundary conditions. By using the contraction mapping principle, we prove the existence and uniqueness of a solution of the nonlocal impulsive boundary value problem for given feasible controls. We compute the first and second variations of the performance functional and use them to obtain various necessary second-order optimality conditions.

5.
The purpose of this paper is to derive some pointwise second-order necessary conditions for stochastic optimal controls in the general case that the control variable enters into both the drift and the diffusion terms. When the control region is convex, a pointwise second-order necessary condition for stochastic singular optimal controls in the classical sense is established; while when the control region is allowed to be nonconvex, we obtain a pointwise second-order necessary condition for stochastic singular optimal controls in the sense of Pontryagin-type maximum principle. It is found that, quite different from the first-order necessary conditions, the correction part of the solution to the second-order adjoint equation appears in the pointwise second-order necessary conditions whenever the diffusion term depends on the control variable, even if the control region is convex.
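For context, a sketch in standard (assumed) notation: the controlled state is
\[
dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t ,
\]
the first-order adjoint is a pair (p_t, q_t) and the second-order adjoint a pair (P_t, Q_t), each solving a backward SDE. The "correction part" mentioned above is, in this terminology, the martingale integrand Q_t of the second-order adjoint; it is absent from first-order conditions but enters the pointwise second-order conditions whenever \sigma depends on u.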

6.
We derive necessary second-order optimality conditions for discontinuous controls in optimal control problems of ordinary differential equations with initial-final state constraints and mixed state-control constraints of equality and inequality type. Under the assumption that the gradients with respect to the control of active mixed constraints are linearly independent, the necessary conditions follow from a Pontryagin minimum in the problem. Together with the sufficient second-order conditions [70], the necessary conditions of the present paper constitute a pair of no-gap conditions.

7.
We consider the optimization problem for a bilinear functional with respect to a linear phase system with a modulus-constrained control. On the basis of exact formulas for the increment of the functional, we establish sufficient optimality conditions for extremal controls. These conditions are stated as inequalities for one-dimensional functions on a time interval. They supplement the maximum principle while keeping the implementation complexity at the same level. The optimization problem for a quadratic functional is reduced to the bilinear case with the help of the matrix adjoint (conjugate) function.

8.
The present paper is concerned with the study of controls which are singular in the sense of the maximum principle. We obtain necessary conditions for optimality of singular controls in systems governed by ordinary differential equations. A useful feature of the method considered here is that it can be applied to optimal control problems with distributed parameters. This research was supported in part by the National Science Foundation under Grant No. NSF-MCS-80-02337 at the University of Michigan. The author wishes to express his deep gratitude to Professor L. Cesari for his valuable guidance and constant encouragement during the preparation of this paper.
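For readers unfamiliar with the terminology: with Hamiltonian H(t, x, \psi, u), an extremal control is singular (in the sense of the maximum principle) on an interval where the maximum condition fails to determine it. For example, for a control-affine system
\[
\dot x = f_0(t, x) + f_1(t, x)\,u, \qquad H = \psi^\top f_0 + \psi^\top f_1\, u ,
\]
singularity means the switching function \psi^\top f_1 vanishes identically on the interval, so the first-order principle gives no information and additional necessary conditions of the kind studied in the paper become relevant.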

9.
Near-optimal controls are as important as optimal controls for both theory and applications. At the same time, using an inhibitor to control harmful microorganisms while ensuring maximum growth of the beneficial (target) microorganisms is a topic of considerable interest in chemostat modelling. In this paper we therefore consider a stochastic chemostat model with a non-zero inhibition cost over a finite time horizon. The near-optimal control problem is formulated by minimizing both the number of harmful microorganisms and the cost of the inhibitor. We show that the Hamiltonian function is the key tool for estimating the objective functional and, using the adjoint equation, we obtain error estimates for near-optimality. Finally, we establish necessary and sufficient conditions for stochastic near-optimal controls of this model; numerical simulations and some conclusions are also given.
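A hedged sketch of the role of the Hamiltonian in such estimates, in generic notation rather than the paper's chemostat model: for dx_t = f(x_t, u_t)dt + \sigma(x_t)dW_t with running cost L, the Hamiltonian
\[
H(x, u, p, q) = \langle p, f(x, u)\rangle + \langle q, \sigma(x)\rangle + L(x, u)
\]
is evaluated along the adjoint pair (p_t, q_t) of the backward adjoint equation, and near-optimality of a control is quantified by how far it is from extremizing H(x_t, \cdot, p_t, q_t), the error estimates bounding this gap in terms of the \varepsilon of near-optimality.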

10.
The present paper studies the stochastic maximum principle in singular optimal control, where the state is governed by a stochastic differential equation with nonsmooth coefficients and both classical and singular controls are allowed. The proof of the main result is based on approximating the initial problem by a sequence of control problems with smooth coefficients. We then apply Ekeland's variational principle to this approximating sequence in order to establish necessary conditions satisfied by a sequence of near-optimal controls. Finally, we prove the convergence of the scheme, using Krylov's inequality in the nondegenerate case and the Bouleau-Hirsch flow property in the degenerate one. The adjoint process obtained is given by means of distributional derivatives of the coefficients.

11.
In this paper we consider an optimal control problem posed over piecewise continuous controls and involving state-control (mixed) equality constraints. We provide an explicit derivation of second order necessary conditions simpler than others available in the literature, yielding a clear understanding of how to define a set of “differentially admissible variations” where a certain quadratic form is nonnegative.
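The flavor of such conditions, in assumed notation: with the Hamiltonian H and its derivatives evaluated along the reference process, the second-order necessary condition states that a quadratic form such as
\[
\Omega(\xi, v) = \int_0^T \bigl( \xi^\top H_{xx}\,\xi + 2\,\xi^\top H_{xu}\, v + v^\top H_{uu}\, v \bigr)\,dt \;+\; \text{endpoint terms}
\]
is nonnegative over all variations (\xi, v) obeying the linearized dynamics \dot\xi = f_x \xi + f_u v and the linearized mixed equality constraints; the "differentially admissible variations" of the abstract make this set of variations precise.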

12.
An optimal control problem, which includes restrictions on the controls and equality/inequality constraints on the terminal states, is formulated. Second-order necessary conditions of the accessory-problem type are obtained in the absence of normality conditions. It is shown that the necessary conditions generalize and simplify prior results due to Hestenes (Ref. 5) and Warga (Refs. 6 and 7).

13.
The control literature either presents sufficient conditions for global optimality (for example, the Hamilton-Jacobi-Bellman theorem) or, if concerned with local optimality, restricts attention to comparison controls which are local in the L∞-sense. In this paper, use is made of an exact expression for the change in cost due to a change in control, a natural extension of a result due to Weierstrass, to obtain sufficient conditions for a control to be a strong minimum (in the sense that comparison controls are merely required to be close in the L1-sense).
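A hedged reminder of the classical exact-increment idea alluded to (this is the textbook verification-function identity, not necessarily the paper's formula): for cost J(u) = \int_0^T L(t, x, u)\,dt + \varphi(x(T)) and any C^1 function V with V(T, \cdot) = \varphi, one has, exactly,
\[
J(v) - J(u) = \int_0^T \bigl[ E\bigl(t, x_v(t), v(t)\bigr) - E\bigl(t, x_u(t), u(t)\bigr) \bigr]\, dt,
\qquad E(t, x, w) := L(t, x, w) + V_t(t, x) + V_x(t, x)\, f(t, x, w) .
\]
If V can be chosen with E \ge 0 everywhere and E = 0 along the reference pair (x_u, u), then J(v) \ge J(u) for every admissible comparison pair, which is the strong-minimum mechanism referred to in the abstract.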

14.
A monotonicity result is utilized to derive sufficient optimality conditions of considerable generality for an individual trajectory in control theory. The sufficiency theorem embodying these conditions generalizes those of Boltyanskii and Leitmann and is applied to a simple control system to which their sufficiency theorems are not applicable. Conditions on the state equations and state space are completely relaxed. The set of admissible controls is extended to the set of measurable controls, and the integrand of the performance index has its membership extended to the class of bounded Borel-measurable functions. The decomposition of the state space is required to be only plain denumerable.

15.
We formulate sufficient conditions for the technical stability on given bounded and infinite time intervals and for the asymptotic technical stability of continuously controlled linear dynamical processes with distributed parameters. By using the comparison method and the method of Lagrange multipliers in combination with the Lyapunov direct method, we obtain criteria which define a set of controls providing the technical stability of the output process. We select the optimal control that realizes the least value of the norm corresponding to a given process. Institute of Mechanics, Ukrainian Academy of Sciences, Kiev. Translated from Ukrainskii Matematicheskii Zhurnal, Vol. 49, No. 10, pp. 1337–1344, October 1997.
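A brief, hedged sketch of the comparison-method ingredient (standard form; the distributed-parameter setting of the paper adds Lagrange multipliers on top of it): if V is a Lyapunov-type functional of the output process and along controlled trajectories
\[
\frac{d}{dt}\, V\bigl(t, x(t)\bigr) \le w\bigl(t, V(t, x(t))\bigr),
\]
then V(t, x(t)) \le r(t), the maximal solution of the scalar comparison equation \dot r = w(t, r) with r(t_0) \ge V(t_0, x(t_0)); bounding r by the prescribed constraint sets on the given bounded or infinite interval is the mechanism behind technical-stability criteria of this kind.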

16.
In this paper, we present a method to obtain necessary conditions for optimality of singular controls in systems governed by partial differential equations (distributed-parameter systems). The method is based on the one developed earlier by the author for singular control problems described by ordinary differential equations. As applications, we consider conditions for optimality of singular controls in a Darboux-Goursat system and in control systems that describe chemical processes. This research was supported in part by the National Science Foundation under Grant No. NSF-MCS-80-02337 at the University of Michigan. The author wishes to express his deep gratitude to Professor L. Cesari for his valuable guidance and constant encouragement during the preparation of this paper.
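In its standard form (assumed here), the Darboux-Goursat system mentioned is a controlled hyperbolic equation with data prescribed on the characteristics,
\[
\frac{\partial^2 z}{\partial x\,\partial y} = f\bigl(x, y, z, z_x, z_y, u(x, y)\bigr), \qquad
z(x, 0) = \varphi(x), \quad z(0, y) = \psi(y) ,
\]
which is the prototypical distributed-parameter setting in which singular-control conditions of the above type are tested.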

17.
In this paper we discuss necessary and sufficient conditions for near-optimal singular stochastic controls for systems driven by nonlinear stochastic differential equations (SDEs for short). The proof of our result is based on Ekeland's variational principle and some delicate estimates of the state and adjoint processes. It is well known that optimal singular controls may fail to exist even in simple cases. This justifies the use of near-optimal singular controls, which exist under minimal conditions and are sufficient in most practical cases. Moreover, since there are many near-optimal singular controls, it is possible to choose suitable ones that are convenient for implementation. This result generalizes Zhou's stochastic maximum principle for near-optimality to the singular control problem.
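For reference, the statement of Ekeland's variational principle that such proofs rely on (standard form): if (V, d) is a complete metric space, F : V \to \mathbb{R}\cup\{+\infty\} is lower semicontinuous and bounded below, and F(u) \le \inf_V F + \varepsilon for some \varepsilon > 0, then for every \lambda > 0 there exists v with
\[
F(v) \le F(u), \qquad d(u, v) \le \lambda, \qquad F(w) > F(v) - \tfrac{\varepsilon}{\lambda}\, d(v, w) \quad \text{for all } w \ne v .
\]
Taking \lambda = \sqrt{\varepsilon} and applying this to the cost functional over a suitable metric on the admissible controls produces the perturbed (near-optimal) controls whose state and adjoint processes are then estimated.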

18.
An optimal control problem with linear dynamics is considered on a fixed time interval. The ends of the interval correspond to terminal spaces, and a finite-dimensional optimization problem is formulated on the Cartesian product of these spaces. Two components of the solution of this problem define the initial and terminal conditions for the controlled dynamics. The dynamics in the optimal control problem is treated as an equality constraint. The controls are assumed to be bounded in the norm of L2. A saddle-point method is proposed to solve the problem. The method is based on finding saddle points of the Lagrangian. The weak convergence of the method in controls and its strong convergence in state trajectories, dual trajectories, and terminal variables are proved.
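As a minimal finite-dimensional analogue of the saddle-point idea (not the paper's method, which works in L2 over the controls and on the terminal spaces), the sketch below runs a plain gradient descent-ascent on the Lagrangian of a toy equality-constrained quadratic problem; the data A, b and the step size are hypothetical.

    import numpy as np

    # Toy problem: minimize 0.5*||x||^2 subject to A x = b,
    # with Lagrangian L(x, lam) = 0.5*||x||^2 + lam . (A x - b).
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])
    b = np.array([1.0, 2.0])

    x = np.zeros(3)       # primal iterate
    lam = np.zeros(2)     # dual iterate
    step = 0.1
    for _ in range(5000):
        grad_x = x + A.T @ lam       # gradient of L in x
        grad_lam = A @ x - b         # gradient of L in lam
        x = x - step * grad_x        # descent in the primal variable
        lam = lam + step * grad_lam  # ascent in the dual variable

    print("x =", x)                  # approaches the constrained minimizer
    print("Ax - b =", A @ x - b)     # constraint residual tends to zero

The limit pair satisfies the saddle-point inequalities L(x*, \mu) \le L(x*, \lambda*) \le L(y, \lambda*); the abstract's result is the convergence analysis of a saddle-point method of this kind in the infinite-dimensional setting, weak in the controls and strong in the state, dual, and terminal variables.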

19.
We consider the control problem for a system described by ordinary differential equations with linear controls. We present sufficient conditions for finding an exact solution of the control problem for a three-dimensional nilpotent system with a two-dimensional linear control in the form of programmed controls and feedback controls. We consider two examples of the computation of controls with the use of linear vector fields on the plane.
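A standard example of the class of systems in question (not claimed to be one of the paper's two examples) is the nonholonomic (Brockett) integrator,
\[
\dot x_1 = u_1, \qquad \dot x_2 = u_2, \qquad \dot x_3 = x_1 u_2 - x_2 u_1 ,
\]
a three-dimensional system that is linear in the two-dimensional control and nilpotent (all Lie brackets of the control vector fields of order higher than one vanish), which is what makes explicit programmed and feedback steering controls computable.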

20.
We establish necessary and sufficient conditions of near-optimality for nonlinear systems governed by forward-backward stochastic differential equations with controlled jump processes (FBSDEJs for short). The set of controls under consideration is necessarily convex. The proof of our result is based on Ekeland's variational principle and on continuity, in a suitable sense, of the state and adjoint processes with respect to the control variable. We prove that, under an additional hypothesis, the near-maximum condition on the Hamiltonian function is a sufficient condition for near-optimality. Finally, as an application to finance, a mean-variance portfolio selection problem mixed with a recursive utility optimization problem is given.
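For orientation, a generic controlled FBSDEJ of the type referred to (notation assumed) couples a forward and a backward equation driven by a Brownian motion W and a compensated Poisson random measure \tilde N:
\[
dx_t = b(t, x_t, u_t)\,dt + \sigma(t, x_t, u_t)\,dW_t + \int_E c(t, x_{t^-}, u_t, e)\,\tilde N(dt, de), \qquad x_0 = a,
\]
\[
dy_t = -\,f\bigl(t, x_t, y_t, z_t, k_t(\cdot), u_t\bigr)\,dt + z_t\,dW_t + \int_E k_t(e)\,\tilde N(dt, de), \qquad y_T = g(x_T),
\]
with the Hamiltonian built from the adjoint processes of this coupled system being the one whose near-maximum condition appears in the abstract.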
