Similar Documents
20 similar documents found.
1.
Geometric methods for nonlinear optimal control problems
It is the purpose of this paper to develop and present new approaches to optimal control problems for which the state evolution equation is nonlinear. For bilinear systems in which the evolution equation is right invariant, it is possible to use ideas from differential geometry and Lie theory to obtain explicit closed-form solutions. The author wishes to thank Professor A. Krener for many stimulating discussions and in particular for suggesting Theorem 3.3. Also, special thanks are due to the author's thesis advisor, Professor R. W. Brockett, under whose direction most of the research was done. Finally, the author thanks two anonymous referees for suggestions which have improved the exposition.
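For orientation, a minimal sketch of a right-invariant bilinear system of the kind referred to (the symbols G, A, B_i are generic assumptions, not taken from the paper):
\[
\dot X(t) = \Big(A + \sum_{i=1}^{m} u_i(t)\,B_i\Big)\, X(t), \qquad X(0) = X_0 \in G,
\]
where X(t) evolves on a matrix Lie group G and A, B_i belong to its Lie algebra; right invariance means that X(t)R is again a trajectory for any fixed R in G, which is what makes the Lie-theoretic machinery applicable.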

2.
We consider a Bolza optimal control problem with state constraints. It is well known that, under some technical assumptions, every strong local minimizer of this problem satisfies first order necessary optimality conditions in the form of a constrained maximum principle. In general, the maximum principle may be abnormal or even degenerate and so does not provide sufficient information about optimal controls. In the recent literature, some sufficient conditions were proposed to guarantee that at least one maximum principle is nondegenerate; cf. [A.V. Arutyunov, S.M. Aseev, Investigation of the degeneracy phenomenon of the maximum principle for optimal control problems with state constraints, SIAM J. Control Optim. 35 (1997) 930–952; F. Rampazzo, R.B. Vinter, A theorem on existence of neighbouring trajectories satisfying a state constraint, with applications to optimal control, IMA J. Math. Control Inform. 16 (4) (1999) 335–351; F. Rampazzo, R.B. Vinter, Degenerate optimal control problems with state constraints, SIAM J. Control Optim. 39 (4) (2000) 989–1007]. Our aim is to show that conditions of a similar nature actually guarantee normality of every nondegenerate maximum principle. In particular, we allow the initial condition to be fixed and the state constraints to be nonsmooth. To prove normality we use a J. Yorke type linearization of control systems and show the existence of a solution to a linearized control system satisfying new state constraints defined, in turn, by linearization of the original set of constraints along an extremal trajectory.
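As a point of reference, a standard Bolza problem with state constraints of the type discussed above can be written (in generic notation, not the paper's) as
\[
\min\ \varphi\big(x(0),x(T)\big) + \int_0^T L\big(t,x(t),u(t)\big)\,dt
\quad\text{s.t.}\quad \dot x(t)=f\big(t,x(t),u(t)\big),\ \ u(t)\in U,\ \ x(t)\in K \ \text{for all } t\in[0,T].
\]
The maximum principle is called normal when the multiplier attached to the cost can be normalized to 1; in the abnormal case that multiplier vanishes and the conditions carry no information about the cost, which is why normality results of the kind described are of interest.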

3.
Necessary conditions in terms of a local minimum principle are derived for optimal control problems subject to index-2 differential-algebraic equations, pure state constraints, and mixed control-state constraints. Differential-algebraic equations are composite systems of differential equations and algebraic equations, which arise frequently in practical applications. The local minimum principle is based on the necessary optimality conditions for general infinite optimization problems. The special structure of the optimal control problem under consideration is exploited and allows us to obtain more regular representations for the multipliers involved. An additional Mangasarian-Fromovitz-like constraint qualification for the optimal control problem ensures the regularity of a local minimum. An illustrative example completes the article. The author thanks the referees for careful reading and helpful suggestions and comments.
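For concreteness, one common semi-explicit (Hessenberg) form of an index-2 DAE control system, given here as a generic sketch rather than the paper's exact setting, is
\[
\dot x(t) = f\big(x(t), y(t), u(t)\big), \qquad 0 = g\big(x(t)\big),
\]
which has (differentiation) index 2 when, roughly, the matrix $g'_x\,f'_y$ is nonsingular, so the algebraic variable y is only determined after differentiating the constraint once. Pure state constraints $c(x(t)) \le 0$ and mixed control-state constraints $d(x(t),u(t)) \le 0$ are then imposed on top of this system.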

4.
It is well known in optimal control theory that the maximum principle, in general, furnishes only necessary optimality conditions for an admissible process to be an optimal one. It is also well known that if a process satisfies the maximum principle in a problem with convex data, the maximum principle turns out to be a sufficient condition as well. Here an invexity-type condition for state constrained optimal control problems is defined and shown to be a sufficient optimality condition. Further, it is demonstrated that all optimal control problems in which every extremal process is optimal necessarily obey this invexity condition. Thus optimal control problems which satisfy such a condition constitute the most general class of problems where the maximum principle automatically becomes a set of sufficient optimality conditions.

5.
A dynamical system is assumed to be governed by a set of ordinary differential equations subject to control. The set of points in state space from which there exist permissible controls that can transfer these points to a prescribed target set in a finite time interval is called a capture set. The task of determining the capture set is studied in two contexts: first, in the case of the system subject to a single control vector; and second, in the case of the system subject to two control vectors, each operated independently. In the latter case, it is assumed that one controller's aim is to cause the system to attain the target, and the other's is to prevent that from occurring. Sufficient conditions are developed that, when satisfied everywhere on the interior of some subset of the state space, ensure that this subset is truly a capture set. A candidate capture set is assumed to have already been predetermined by independent methods. The sufficient conditions developed herein require the use of an auxiliary scalar function of the state, similar to a Lyapunov function. To ensure capture, five conditions must be satisfied. Four of these constrain the auxiliary state function. Basically, these four conditions require that the boundary of the controllable set be an envelope of the auxiliary state function and that this function be positive inside the capture set, approaching zero as the target set is approached. The final condition tests the inner product of the gradient of the auxiliary state function with the system state velocity vector. If the sign of that inner product can be made negative everywhere within the test subset, then that subset is a capture set. Dedicated to Professor A. Busemann. The authors are indebted to Professors G. Leitmann and J. M. Skowronskii for their useful comments and discussion.
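In symbols, a hedged reading of the final condition is: with V the auxiliary state function and $\dot x = f(x,u,v)$ the controlled dynamics (generic notation assumed here), the capturing controller must be able to choose u so that
\[
\nabla V(x)^{\top} f(x,u,v) < 0
\]
for every admissible choice v of the opposing controller, at every state x in the interior of the candidate set; together with the four conditions on V itself (V positive inside the set, approaching zero at the target), this is what certifies the candidate set as a capture set.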

6.
An optimal control problem with a prescribed performance index for parabolic systems with time delays is investigated. A necessary condition for optimality is formulated and proved in the form of a maximum principle. Under additional conditions, the maximum principle gives sufficient conditions for optimality. It is also shown that the optimal control is unique. As an illustration of the theoretical considerations, an analytic solution is obtained for a time-delayed diffusion system. The author wishes to express his deep gratitude to Professors J. M. Sloss and S. Adali for their valuable guidance and constant encouragement during the preparation of this paper.
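A minimal example of a time-delayed diffusion system of this sort (purely illustrative; the coefficient a, delay τ, and boundary data are assumptions, not the paper's model) is
\[
\frac{\partial y}{\partial t}(z,t) = \Delta y(z,t) + a\,y(z,t-\tau) + u(z,t), \qquad z \in \Omega,\ t > 0,
\]
with a prescribed history $y(\cdot,s)$ for $s \in [-\tau,0]$ and, say, homogeneous Dirichlet data on $\partial\Omega$; the maximum principle then characterizes the optimal distributed control u.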

7.
The main result in this short note is that the integral form of the Leitmann-Stalford sufficiency conditions can be verified for a class of optimal control problems whose Hamiltonian is not concave with respect to the state variable. The main requirement for this class of problems is that the dynamics be sufficiently dissipative. As an application, a Stackelberg differential game between a producer and a developer is considered. Using our result, we show that the necessary conditions implied by Pontryagin's maximum principle are also sufficient. This allows a complete characterization of the solution.

8.
An integral maximum principle is developed for a class of nonlinear systems containing time delays in state and control variables. Its proof is based on the theory of quasiconvex families of functions, originally developed by Gamkrelidze and extended by Banks. This result is used to obtain a pointwise principle of the Pontryagin type. The authors wish to acknowledge Professor J. M. Blatt for suggesting this problem. Further, they also wish to acknowledge the referee of the paper for bringing to their attention the problems discussed in Section 6.

9.
We prove the Kuhn-Tucker sufficient optimality condition, the Wolfe duality, and a modified Mond-Weir duality for vector optimization problems involving various types of invex-convexlike functions. The class of such functions contains many known generalized convex functions. As applications, we demonstrate that, under invex-convexlikeness assumptions, the Pontryagin maximum principle is a sufficient optimality condition for cooperative differential games. The Wolfe duality is established for these games. The author is indebted to the referees and Professor W. Stadler for valuable remarks and comments, which have been used to revise the paper considerably.

10.
11.
The author studies an optimal control problem for a conditional mean-field stochastic differential equation. Such equations are related to certain stochastic optimal control problems under partial information and can be regarded as a generalization of mean-field stochastic differential equations. Necessary and sufficient conditions satisfied by an optimal control are given in the form of Pontryagin's maximum principle. In addition, a linear-quadratic optimal control problem is presented to illustrate the application of the theoretical results.
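A conditional mean-field SDE of the kind referred to can be sketched, under the usual partial-information reading (notation assumed here, not the author's), as
\[
dX_t = b\big(t, X_t, \mathbb{E}[X_t \mid \mathcal{G}_t], u_t\big)\,dt
      + \sigma\big(t, X_t, \mathbb{E}[X_t \mid \mathcal{G}_t], u_t\big)\,dW_t,
\]
where $\mathcal{G}_t \subset \mathcal{F}_t$ is a sub-filtration carrying the available partial information; replacing the conditional expectation by the plain expectation $\mathbb{E}[X_t]$ recovers an ordinary mean-field SDE, which is the sense in which this model generalizes the mean-field case.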

12.
A class of systems governed by quasilinear parabolic partial differential equations with first boundary conditions is considered. Existence of solutions for this class of systems and their a priori estimates are established. Further, a theorem on the existence of optimal controls for the corresponding control problem is obtained. Its proof is based on Filippov's implicit functions lemma. The control restraint set U is taken as a measurable multifunction. The authors wish to thank Professor L. Cesari for his most valuable comments and suggestions. In fact, a condition assumed in the original version of this paper was substantially relaxed by him. For details, see Remark 4.1.

13.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward-backward stochastic differential equation (MFFBSDE, for short) and the admissible control is of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland's variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle characterizing the necessary condition for an optimal control. We also study a stochastic linear quadratic control problem with state constraints.

14.
Necessary conditions are derived for optimal control problems subject to index-2 differential-algebraic equations, pure state constraints, and mixed control-state constraints. Differential-algebraic equations are composite systems of differential equations and algebraic equations, which arise frequently in practical applications. The structure of the optimal control problem under consideration is exploited, and special emphasis is laid on the representation of the Lagrange multipliers resulting from the necessary conditions for infinite optimization problems. The author thanks the referees for careful reading and helpful suggestions and comments.

15.
The aim of this paper is to show that the simple gradient method is efficient when applied to the optimal control of a distributed parameter system. The system is a model of a biological membrane (with enzymes), and the problem is to approach a desired flux of substrate entering the membrane by acting on an inhibitor's concentration at the boundary of the membrane. This paper was presented at the 4th IFIP Symposium, Los Angeles, California, 1971. The author thanks Professor J. L. Lions for his guidance and supervision in this work. He also thanks Messieurs R. Glowinsky, M. Nedelec, L. Tartar, and J. P. Yvon for a number of very helpful discussions on the subject of this paper. This work was done in collaboration with the Laboratory of Medical Biochemistry, Charles Nicolle Hospital, Rouen, France. The author is much indebted to Dr. D. Thomas, who suggested this problem.
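The "simple gradient method" referred to is, in sketch form (the step size and the way the gradient is evaluated are standard assumptions, not details quoted from the paper),
\[
u^{k+1} = u^{k} - \rho_k\, J'\big(u^{k}\big),
\]
where the derivative $J'(u^k)$ of the cost with respect to the boundary control is typically obtained by solving the state equations forward and an adjoint system backward at the current iterate, and $\rho_k > 0$ is a step size.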

16.
In this paper we develop the necessary conditions of optimality for a class of distributed parameter systems (partial differential equations) determined by operator valued measures and controlled by vector measures. Based on some recent results on existence of optimal controls from the space of vector measures, we develop necessary conditions of optimality for a class of control problems. The main results are the necessary conditions of optimality for problems without state constraints and those with state constraints. Also, a conceptual algorithm along with a brief discussion of its convergence is presented.

17.
We consider an optimal control problem under state constraints and show that to every optimal solution there corresponds an adjoint state satisfying the first order necessary optimality conditions in the form of a maximum principle, together with sensitivity relations involving the value function. Such sensitivity relations were recently investigated by P. Bettiol and R.B. Vinter for state constraints with smooth boundary. In contrast to their work, our setting concerns differential inclusions and nonsmooth state constraints. To obtain our result we derive neighboring feasible trajectory estimates using a novel generalization of the so-called inward pointing condition.
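For context, the classical (smooth-boundary) inward pointing condition that the abstract generalizes can be stated, in generic notation, for a constraint set $K = \{x : g(x) \le 0\}$ and a differential inclusion $\dot x \in F(x)$ as
\[
\min_{v \in F(x)} \nabla g(x) \cdot v < 0 \qquad \text{for every } x \text{ with } g(x) = 0,
\]
i.e. at each boundary point of K some admissible velocity points strictly into the interior; conditions of this type are what make neighboring feasible trajectory estimates possible.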

18.
Near-optimization is as sensible and important as optimization for both theory and applications. This paper deals with necessary and sufficient conditions for near-optimal singular stochastic controls for nonlinear controlled stochastic differential equations of mean-field type, also called equations of McKean–Vlasov type. The proof of our main result is based on Ekeland's variational principle and some estimates of the state and adjoint processes. It is shown that an optimal singular control may fail to exist even in simple cases, while near-optimal singular controls always exist. This justifies the use of near-optimal stochastic controls, which exist under minimal hypotheses and are sufficient in most practical cases. Moreover, since there are many near-optimal singular controls, it is possible to select among them appropriate ones that are easier for analysis and implementation. Under additional assumptions, we prove that the near-maximum condition on the Hamiltonian function is a sufficient condition for near-optimality. This paper extends the results obtained in (Zhou, X.Y.: SIAM J. Control Optim. 36(3), 929–947, 1998) to a class of singular stochastic control problems involving stochastic differential equations of mean-field type. An example is given to illustrate the theoretical results.
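For a cost functional J to be minimized over admissible controls $\mathcal U$, near-optimality can be read in the usual $\varepsilon$-optimal sense (a generic definition consistent with the abstract, not quoted from the paper): a control $u^{\varepsilon}$ is near-optimal when
\[
J\big(u^{\varepsilon}\big) \le \inf_{u \in \mathcal U} J(u) + \varepsilon .
\]
Such controls exist whenever the infimum is finite, even if no exact minimizer does, which is the point of studying the near-maximum condition on the Hamiltonian as a sufficient condition for this weaker notion of optimality.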

19.
On the basis of the results of the first part of the paper, we consider necessary conditions for minimizing sequences in an optimal control problem with a pointwise state constraint of inequality type and with dynamics described by a linear hyperbolic equation in divergence form with the homogeneous Dirichlet boundary condition. The state constraint contains a function parameter that belongs to the class of continuous functions and occurs as an additive term. For the parametric optimization problem, we also consider regularity and normality conditions stipulated by the differential properties of its value function.

20.
This paper is concerned with the analysis of a control problem related to the optimal management of a bioreactor. This real-world problem is formulated as a state-control constrained optimal control problem. We analyze the state system (a complex system of partial differential equations modelling the eutrophication processes for non-smooth velocities), and we prove that the control problem admits at least one solution. Finally, we present a detailed derivation of a first order optimality condition, involving a suitable adjoint system, in order to characterize these optimal solutions, together with some computational results.
