Similar Documents
 20 similar documents found (search time: 0 ms)
1.
This paper considers the numerical solution of two classes of optimal control problems, called Problem P1 and Problem P2 for easy identification. Problem P1 involves a functional I subject to differential constraints and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized, while the constraints and the boundary conditions are satisfied to a predetermined accuracy. Problem P2 extends Problem P1 to include nondifferential constraints to be satisfied everywhere along the interval of integration. Algorithms are developed for both Problem P1 and Problem P2. The approach taken is a sequence of two-phase cycles, composed of a gradient phase and a restoration phase. The gradient phase involves one iteration and is designed to decrease the value of the functional, while the constraints are satisfied to first order. The restoration phase involves one or more iterations and is designed to force constraint satisfaction to a predetermined accuracy, while the norm squared of the variations of the control, the parameter, and the missing components of the initial state is minimized. The principal property of both algorithms is that they produce a sequence of feasible suboptimal solutions: the functions obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the values of the functional I corresponding to any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, while the stepsize of the restoration phase is obtained by a one-dimensional search on the constraint error P. The gradient stepsize and the restoration stepsize are chosen so that the restoration phase preserves the descent property of the gradient phase. Therefore, the value of the functional I at the end of any complete gradient-restoration cycle is smaller than the value of the same functional at the beginning of that cycle. The algorithms presented here differ from those of Refs. 1 and 2 in that it is not required that the state vector be given at the initial point. Instead, the initial conditions can be absolutely general. In analogy with Refs. 1 and 2, the present algorithms are capable of handling general final conditions; therefore, they are suited for the solution of optimal control problems with general boundary conditions. Their importance lies in the fact that many optimal control problems involve initial conditions of the type considered here. Six numerical examples are presented in order to illustrate the performance of the algorithms associated with Problem P1 and Problem P2. The numerical results show the feasibility as well as the convergence characteristics of these algorithms. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-76-3075. Partial support for S. Gonzalez was provided by CONACYT, Consejo Nacional de Ciencia y Tecnologia, Mexico City, Mexico.
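
The two-phase cycle described in this abstract can be illustrated with a small finite-dimensional analogue. The Python sketch below replaces the full optimal control formulation with a generic problem of minimizing f(z) subject to c(z) = 0; the function names, the forward-difference derivatives, the bisection stepsize searches, and the tolerances are illustrative assumptions, not the authors' implementation.

```python
# Minimal finite-dimensional analogue of a gradient-restoration cycle:
# minimize f(z) subject to c(z) = 0.  (Illustrative sketch only; the
# functions f, c and all tolerances below are hypothetical examples.)
import numpy as np

def grad(fun, z, h=1e-6):
    """Forward-difference gradient/Jacobian of a scalar or vector function."""
    f0 = np.atleast_1d(fun(z))
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        zj = z.copy(); zj[j] += h
        J[:, j] = (np.atleast_1d(fun(zj)) - f0) / h
    return J if f0.size > 1 else J.ravel()

def restoration_phase(z, c, tol=1e-8, max_iter=50):
    """Drive the constraint error P = |c(z)|^2 below tol by least-squares steps."""
    for _ in range(max_iter):
        cv = np.atleast_1d(c(z))
        P = cv @ cv
        if P <= tol:
            break
        J = np.atleast_2d(grad(c, z))
        dz = -np.linalg.lstsq(J, cv, rcond=None)[0]   # minimum-norm correction
        alpha = 1.0                                    # 1D search on the error P
        while alpha > 1e-12:
            cv_new = np.atleast_1d(c(z + alpha * dz))
            if cv_new @ cv_new < P:
                z = z + alpha * dz
                break
            alpha *= 0.5
    return z

def gradient_restoration_cycle(z, f, c, alpha0=1.0, tol=1e-8):
    """One complete cycle: a descent step on f, then restoration of c(z) = 0."""
    g = grad(f, z)
    J = np.atleast_2d(grad(c, z))
    # project the gradient onto the tangent space of the constraints
    lam = np.linalg.lstsq(J @ J.T, J @ g, rcond=None)[0]
    d = -(g - J.T @ lam)
    # 1D search on the objective along d, keeping the post-restoration descent
    alpha, f0 = alpha0, f(z)
    while alpha > 1e-12 and f(restoration_phase(z + alpha * d, c, tol)) >= f0:
        alpha *= 0.5
    return restoration_phase(z + alpha * d, c, tol)

# Example: minimize x^2 + y^2 subject to x + y = 1 (solution: x = y = 0.5).
f = lambda z: z[0]**2 + z[1]**2
c = lambda z: np.array([z[0] + z[1] - 1.0])
z = np.array([2.0, -1.0])
for _ in range(20):
    z = gradient_restoration_cycle(z, f, c)
print(z)
```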

2.
3.
This paper is concerned with necessary conditions for a general optimal control problem developed by Russak and Tan. It is shown that, in most cases, a further relation between the multipliers holds. This result is of interest in particular for the investigation of perturbations of the state constraint.

4.
This paper briefly reviews the literature on necessary optimality conditions for optimal control problems with state-variable inequality constraints. Then, it attempts to unify the treatment of linear optimal control problems with state-variable inequality constraints in the framework of continuous linear programming. The duality theory in this framework makes it possible to relate the adjoint variables arising in different formulations of a problem; these relationships are illustrated by the use of a simple example. This framework also allows more general problems and admits a simplex-like algorithm to solve these problems. This research was partially supported by Grant No. A4619 from the National Research Council of Canada to the first author. The first author also acknowledges the support provided by the Brookhaven National Laboratory, where he conducted his research.
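
For orientation, one common statement of a continuous linear program, into which a linear optimal control problem with state-variable inequality constraints can be cast, is sketched below in generic notation; the exact formulation and duality pairing used by the authors may differ.

```latex
% One common statement of a continuous linear program (generic symbols;
% the precise form used in the paper may differ):
\begin{align*}
\max_{x(\cdot)\,\ge\,0}\quad & \int_0^T a(t)^{\mathsf T} x(t)\,dt \\
\text{s.t.}\quad & B(t)\,x(t) \;\le\; c(t) + \int_0^t K(t,s)\,x(s)\,ds,
\qquad 0 \le t \le T .
\end{align*}
```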

5.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the gradient phase. It is shown that the Lagrange multipliers associated with the gradient phase not only solve the auxiliary minimization problem of the gradient phase, but are also endowed with a supplementary optimality property: they minimize the error in the optimality conditions, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to R. Bellman. This work was supported by the National Science Foundation, Grant No. ENG-79-18667.
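
In generic notation, the supplementary optimality property can be illustrated as follows; this is a hedged sketch for the simplest case (no parameter, fixed initial state, terminal constraint ψ(x(1)) = 0) and is not the paper's exact statement.

```latex
% Hedged illustration: with Hamiltonian H(x,u,\lambda,t) = L(x,u,t) + \lambda^T \phi(x,u,t),
% the gradient-phase multipliers are claimed to solve
\begin{align*}
\min_{\lambda(\cdot),\,\mu}\quad
  & Q = \int_0^1 \big\| H_u(x,u,\lambda,t) \big\|^2 \, dt \\
\text{s.t.}\quad
  & \dot{\lambda} = -H_x(x,u,\lambda,t), \qquad
    \lambda(1) = \big[\psi_x(x(1))\big]^{\mathsf T}\mu ,
\end{align*}
% with the state x(t) and the control u(t) held fixed at their nominal values.
```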

6.
A method of region analysis is developed for solving a class of optimal control problems with one state and one control variable, including state and control constraints. The performance index is strictly convex with respect to the control variable, while this variable appears only linearly in the state equation. The convexity or linearity assumption of the performance index or the state equation with respect to the state variable is not required. The author would like to express his sincere gratitude to Prof. R. Klötzler, Prof. E. Zeidler, Prof. H. Schumann, Prof. J. Focke, and other colleagues of the Department of Mathematics, Karl Marx University, Leipzig, GDR, for their support during his stay in Leipzig.

7.
A computational algorithm for a class of time-lag optimal control problems involving control and terminal inequality constraints is presented. The convergence properties of the algorithm are also investigated. To test the algorithm, an example is solved. This work was partially supported by the Australian Research Grant Committee.

8.
Various methods have been proposed for the numerical solution of optimal control problems with bounded state variables. In this paper, a new method is put forward and compared with two other methods, one of which makes use of adjoint variables whereas the other does not. Some conclusions are drawn on the usefulness of the three methods involved.

9.
It is shown that, when the set of necessary conditions for an optimal control problem with state-variable inequality constraints given by Bryson, Denham, and Dreyfus is appropriately augmented, it is equivalent to the (different) set of conditions given by Jacobson, Lele, and Speyer. Relationships among the various multipliers are given. This work was done at NASA Ames Research Center, Moffett Field, California, under a National Research Council Associateship.

10.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the case of a quadratic functional subject to linear constraints is considered, and a conjugate-gradient algorithm is derived. Nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the differential equations, the boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Next, the more general case of a nonquadratic functional subject to nonlinear constraints is considered. The algorithm derived for the linear-quadratic case is employed with one modification: a restoration phase is inserted between any two successive conjugate-gradient phases. In the restoration phase, variations Δx(t), Δu(t), Δπ are determined by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Thus, a sequential conjugate-gradient-restoration algorithm is constructed in such a way that the differential equations and the boundary conditions are satisfied at the end of each complete conjugate-gradient-restoration cycle. Several numerical examples illustrating the theory of this paper are given in Part 2 (see Ref. 1). These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor A. Miele for stimulating discussions. Formerly, Graduate Student in Aero-Astronautics, Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas.
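
A hedged sketch of the auxiliary problem solved in each gradient phase is given below in generic notation; φ denotes the right-hand side of the differential equations, ψ the terminal conditions, and C a prescribed positive constant tied to the stepsize. The exact expression of the first-order change δI follows the paper.

```latex
% Hedged sketch of the gradient-phase auxiliary problem (generic notation):
\begin{align*}
\min_{\Delta x,\,\Delta u,\,\Delta\pi}\quad
  & \delta I \quad \text{(first-order change of the functional)} \\
\text{s.t.}\quad
  & \Delta\dot{x} = \phi_x\,\Delta x + \phi_u\,\Delta u + \phi_\pi\,\Delta\pi,
    \qquad \Delta x(0) = 0, \\
  & \psi_x\,\Delta x(1) + \psi_\pi\,\Delta\pi = 0, \\
  & \int_0^1 \Delta u^{\mathsf T}\Delta u \; dt
    \;+\; \Delta\pi^{\mathsf T}\Delta\pi \;=\; C .
\end{align*}
```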

11.
It is known that extremal arcs governed by inequality constraints of third order (constraint relations that must be differentiated three times to generate a control equation) cannot join an unconstrained arc, except in special cases. But a control problem is exhibited, for which every extremal includes a constrained arc of third order. The constrained arc joins the end of an infinite sequence of consecutive unconstrained arcs of finite total duration. Evidence (but not proof) is given that this phenomenon is typical, rather than exceptional. An analogous phenomenon is well known for optimal control problems with singular arcs of second order.

12.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the restoration phase. It is shown that the Lagrange multipliers associated with the restoration phase not only solve the auxiliary minimization problem of the restoration phase, but are also endowed with a supplementary optimality property: they minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to L. Cesari. This work was supported by a grant of the National Science Foundation.

13.
This paper contains general transformation techniques useful to convert minimax problems of optimal control into the Mayer-Bolza problem of the calculus of variations [Problem (P)]. We consider two types of minimax problems: minimax problems of Type (Q), in which the minimax function depends on the state and does not depend on the control; and minimax problems of Type (R), in which the minimax function depends on both the state and the control. Both Problem (Q) and Problem (R) can be reduced to Problem (P). For Problem (Q), we exploit the analogy with a bounded-state problem in combination with a transformation of the Jacobson type. This requires the proper augmentation of the state vector x(t), the control vector u(t), and the parameter vector π, as well as the proper augmentation of the constraining relations. As a result of the transformation, the unknown minimax value of the performance index becomes a component of the parameter vector being optimized. For Problem (R), we exploit the analogy with a bounded-control problem in combination with a transformation of the Valentine type. This requires the proper augmentation of the control vector u(t) and the parameter vector π, as well as the proper augmentation of the constraining relations. As a result of the transformation, the unknown minimax value of the performance index becomes a component of the parameter vector being optimized. In a subsequent paper (Part 2), the transformation techniques presented here are employed in conjunction with the sequential gradient-restoration algorithm for solving optimal control problems on a digital computer; both the single-subarc approach and the multiple-subarc approach are discussed. This research was supported by the National Science Foundation, Grant No. ENG-79-18667, and by Wright-Patterson Air Force Base, Contract No. F33615-80-C3000. This paper is a condensation of the investigations reported in Refs. 1-7. The authors are indebted to E. M. Coker and E. M. Sims for analytical and computational assistance.
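
The essence of the Type (Q) reduction can be illustrated as follows; this is a hedged sketch in generic notation, with F denoting the minimax function, and the Jacobson-type handling of the resulting inequality is only indicated, not carried out.

```latex
% Hedged illustration of the Type (Q) reduction (generic notation):
\begin{align*}
\min_{u(\cdot)}\ \max_{0\le t\le 1} F\big(x(t),t\big)
\quad\Longrightarrow\quad
\min_{u(\cdot),\,\pi_0}\ \pi_0
\quad \text{s.t.}\quad
F\big(x(t),t\big) - \pi_0 \le 0, \qquad 0 \le t \le 1 ,
\end{align*}
% so that the unknown minimax value becomes the extra parameter component \pi_0;
% the pointwise inequality is then converted to an equality by a Jacobson-type
% transformation [Type (Q)] or a Valentine-type transformation [Type (R)].
```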

14.
In a previous paper (Part 1), we presented general transformation techniques useful to convert minimax problems of optimal control into the Mayer-Bolza problem of the calculus of variations [Problem (P)]. We considered two types of minimax problems: minimax problems of Type (Q), in which the minimax function depends on the state and does not depend on the control; and minimax problems of Type (R), in which the minimax function depends on both the state and the control. Both Problem (Q) and Problem (R) can be reduced to Problem (P). In this paper, the transformation techniques presented in Part 1 are employed in conjunction with the sequential gradient-restoration algorithm for solving optimal control problems on a digital computer. Both the single-subarc approach and the multiple-subarc approach are employed. Three test problems characterized by known analytical solutions are solved numerically. It is found that the combination of transformation techniques and sequential gradient-restoration algorithm yields numerical solutions which are quite close to the analytical solutions from the point of view of the minimax performance index. The relative differences between the numerical values and the analytical values of the minimax performance index are of order 10^-3 if the single-subarc approach is employed. These relative differences are of order 10^-4 or better if the multiple-subarc approach is employed. This research was supported by the National Science Foundation, Grant No. ENG-79-18667, and by Wright-Patterson Air Force Base, Contract No. F33615-80-C3000. This paper is a condensation of the investigations reported in Refs. 1-7. The authors are indebted to E. M. Coker and E. M. Sims for analytical and computational assistance.

15.
A computational algorithm for solving a class of optimal control problems involving terminal and continuous state constraints of inequality type was developed in Ref. 1. In this paper, we extend the results of Ref. 1 to a more general class of constrained time-delayed optimal control problems, which involves terminal state equality constraints as well as terminal state inequality constraints and continuous state constraints. Two examples have been solved to illustrate the efficiency of the method.

16.
This paper considers the numerical solution of the problem of minimizing a functional I, subject to differential constraints, nondifferential constraints, and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized while the constraints are satisfied to a predetermined accuracy. The modified quasilinearization algorithm (MQA) is extended, so that it can be applied to the solution of optimal control problems with general boundary conditions, where the state is not explicitly given at the initial point. The algorithm presented here preserves the MQA descent property on the cumulative error. This error consists of the error in the optimality conditions and the error in the constraints. Three numerical examples are presented in order to illustrate the performance of the algorithm. The numerical results are discussed to show the feasibility as well as the convergence characteristics of the algorithm. This work was supported by the Electrical Research Institute of Mexico and by CONACYT, Consejo Nacional de Ciencia y Tecnologia, Mexico City, Mexico.
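
A hedged sketch of the cumulative error monitored by the algorithm is given below in generic notation; the precise integrands and boundary terms follow the paper and may differ from this illustration.

```latex
% Hedged sketch: constraint error P plus optimality error Q, for a problem with
% dynamics \dot{x} = \phi, nondifferential constraints S = 0, boundary
% conditions \omega = 0 and \psi = 0, and Hamiltonian H:
\begin{align*}
P &= \int_0^1 \big\|\dot{x}-\phi\big\|^2 dt
   + \int_0^1 \big\|S\big\|^2 dt
   + \big\|\omega\big\|^2 + \big\|\psi\big\|^2
   && \text{(constraint error)}, \\
Q &= \int_0^1 \big\|\dot{\lambda}+H_x\big\|^2 dt
   + \int_0^1 \big\|H_u\big\|^2 dt
   + \text{(transversality errors)}
   && \text{(optimality error)} ,
\end{align*}
% and the descent property preserved by the extended MQA is that the
% cumulative error P + Q decreases from one iteration to the next.
```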

17.
The sufficient optimality conditions of Zeidan for optimal control problems (Refs. 1 and 2) are generalized such that they are applicable to problems with pure state-variable inequality constraints. We derive conditions which assume neither the concavity of the Hamiltonian nor the quasiconcavity of the constraints. Global as well as local optimality conditions are presented.

18.
This note presents an extension of the Miele-Cragg-Iyer-Levy augmented function method for finite-dimensional optimization problems to optimal control problems. A numerical study is provided.

19.
In this paper, we consider an optimal control problem of switched systems with input and state constraints. Owing to the complexity of such constraints and switching laws, it is difficult to solve the problem using standard optimization techniques. In addition, although conjugate gradient algorithms are very useful for solving nonlinear optimization problems, in practical implementations the existing Wolfe condition may never be satisfied due to the presence of numerical errors. Moreover, the mode insertion technique leads only to suboptimal solutions, because only certain mode insertions are considered. Thus, based on an improved conjugate gradient algorithm and a discrete filled function method, an improved bi-level algorithm is proposed to solve this optimization problem. Convergence results indicate that the proposed algorithm is globally convergent. Three numerical examples are solved to illustrate that the proposed algorithm converges faster and yields a better cost function value than existing bi-level algorithms.
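
The remark about the Wolfe condition can be illustrated with a schematic lower-level search; the Python sketch below is not the paper's bi-level algorithm, and the PR+ update, the Armijo fallback, and the Rosenbrock test function are illustrative assumptions.

```python
# Illustrative sketch only: a nonlinear conjugate-gradient loop in which a
# Wolfe line search may fail to return a step because of numerical error,
# in which case we fall back to simple Armijo backtracking.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def armijo(f, g, x, d, alpha=1.0, c1=1e-4, shrink=0.5):
    """Backtracking fallback: accept the first step with sufficient decrease."""
    f0, slope = f(x), g(x) @ d
    while f(x + alpha * d) > f0 + c1 * alpha * slope and alpha > 1e-14:
        alpha *= shrink
    return alpha

def conjugate_gradient(f, g, x0, tol=1e-6, max_iter=500):
    x, grad_x = x0, g(x0)
    d = -grad_x
    for _ in range(max_iter):
        if np.linalg.norm(grad_x) < tol:
            break
        alpha = line_search(f, g, x, d)[0]      # strong Wolfe search
        if alpha is None:                       # Wolfe conditions not satisfiable
            alpha = armijo(f, g, x, d)          # fall back to Armijo backtracking
        x_new = x + alpha * d
        grad_new = g(x_new)
        beta = max(grad_new @ (grad_new - grad_x) / (grad_x @ grad_x), 0.0)  # PR+
        d = -grad_new + beta * d
        x, grad_x = x_new, grad_new
    return x

# Example: minimize the Rosenbrock function (minimum at [1, 1, ..., 1]).
print(conjugate_gradient(rosen, rosen_der, np.zeros(4)))
```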

20.
We study a quasi-variational inequality system with unbounded solutions. It represents the Bellman equation associated with an optimal switching control problem with state constraints arising from production engineering. We show that the optimal cost is the unique viscosity solution of the system. This work was supported by the National Research Council of Argentina, Grant No. PID-BID 213.
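
For orientation, a typical quasi-variational inequality system of this kind, for switching among m modes with running costs f_i, switching costs k_ij > 0, and mode-i operators L^i, is sketched below in generic notation; the precise system in the paper, including the treatment of the state constraints, may differ.

```latex
% Hedged illustration of a switching-control quasi-variational inequality system:
\begin{align*}
\max\Big\{\, L^i u_i(x) - f_i(x),\;\;
            u_i(x) - \min_{j \ne i}\big[\, u_j(x) + k_{ij} \,\big] \Big\} = 0,
\qquad i = 1,\dots,m ,
\end{align*}
% whose unique viscosity solution (u_1, ..., u_m) gives the optimal cost in each mode.
```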
