Similar Documents
20 similar documents retrieved.
1.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. The approach taken is a sequence of two-phase processes or cycles, composed of a gradient phase and a restoration phase. The gradient phase involves a single iteration and is designed to decrease the functional, while the constraints are satisfied to first order. The restoration phase involves one or several iterations and is designed to restore the constraints to a predetermined accuracy, while the norm of the variations of the control and the parameter is minimized. The principal property of the algorithm is that it produces a sequence of feasible suboptimal solutions: the functions x(t), u(t), π obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the functionals of any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, and the stepsize of the restoration phase by a one-dimensional search on the constraint error P. If α_g is the gradient stepsize and α_r is the restoration stepsize, the gradient corrections are of O(α_g) and the restoration corrections are of O(α_r α_g²). Therefore, for α_g sufficiently small, the restoration phase preserves the descent property of the gradient phase: the functional I at the end of any complete gradient-restoration cycle is smaller than the functional I at the beginning of the cycle. To facilitate the numerical solution on digital computers, the actual time θ is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt = 1. In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded to be a component of the parameter π being optimized. The present general formulation differs from that of Ref. 4 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0 ≤ t ≤ 1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) problems involving state equality constraints can be reduced to the present scheme through suitable transformations, and (iii) problems involving inequality constraints can be reduced to the present scheme through suitable transformations. The latter statement applies, for instance, to the following situations: (a) problems with bounded control, (b) problems with bounded state, (c) problems with bounded time rate of change of the state, and (d) problems where some bound is imposed on an arbitrarily prescribed function of the parameter, the control, the state, and the time rate of change of the state. Numerical examples are presented for both the fixed-final-time case and the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
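For readers who want the flavor of the gradient-restoration cycle described above, the following is a minimal finite-dimensional sketch, not the authors' function-space algorithm: f, c, and the variable z stand in for the functional, the constraints, and the triple (x(t), u(t), π), and the one-dimensional searches are reduced to crude halving. All names are illustrative.

```python
import numpy as np

def gradient_restoration_cycles(f, c, grad_f, jac_c, z0, tol_P=1e-8, cycles=30):
    """Toy analogue of a gradient-restoration cycle for: minimize f(z)
    subject to c(z) = 0.  Each cycle is one gradient step along the
    tangent space of the linearized constraints (stepsize by a crude
    one-dimensional search on f), followed by restoration iterations
    (least-square corrections) that drive the constraint error
    P = |c(z)|^2 back below a preset tolerance, so every cycle ends at
    a feasible suboptimal point.  Assumes the constraint Jacobian has
    full row rank."""
    z = np.asarray(z0, dtype=float)
    for _ in range(cycles):
        # gradient phase: project -grad f onto the null space of the Jacobian
        g, J = grad_f(z), jac_c(z)
        d = -(g - J.T @ np.linalg.solve(J @ J.T, J @ g))
        alpha = 1.0
        while f(z + alpha * d) > f(z) and alpha > 1e-12:   # 1-D search by halving
            alpha *= 0.5
        z = z + alpha * d
        # restoration phase: minimum-norm corrections until P <= tol_P
        for _ in range(20):
            if c(z) @ c(z) <= tol_P:
                break
            J = jac_c(z)
            z = z - J.T @ np.linalg.solve(J @ J.T, c(z))
    return z
```

On a toy problem such as minimizing z₁² + z₂² subject to z₁ + z₂ − 1 = 0, the iterates approach (0.5, 0.5) and every cycle ends on the constraint, mirroring the feasibility property stressed in the abstract.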

2.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the case of a quadratic functional subject to linear constraints is considered, and a conjugate-gradient algorithm is derived. Nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the differential equations, the boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Next, the more general case of a nonquadratic functional subject to nonlinear constraints is considered. The algorithm derived for the linear-quadratic case is employed with one modification: a restoration phase is inserted between any two successive conjugate-gradient phases. In the restoration phase, variations Δx(t), Δu(t), Δπ are determined by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Thus, a sequential conjugate-gradient-restoration algorithm is constructed in such a way that the differential equations and the boundary conditions are satisfied at the end of each complete conjugate-gradient-restoration cycle. Several numerical examples illustrating the theory of this paper are given in Part 2 (see Ref. 1). These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor A. Miele for stimulating discussions. Formerly, Graduate Student in Aero-Astronautics, Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas.
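As a point of comparison, here is the standard finite-dimensional conjugate-gradient recursion for a quadratic function; it is only meant to show the direction-update structure that the paper derives for a quadratic functional of the state, control, and parameter subject to linear constraints. The unconstrained setting and the names are illustrative simplifications.

```python
import numpy as np

def conjugate_gradient_quadratic(A, b, x0, tol=1e-10):
    """Minimize 0.5 x'Ax - b'x for symmetric positive-definite A.
    Structure to note: the first direction is a plain gradient step,
    and each later direction is p <- -g + beta p with
    beta = |g_new|^2 / |g_old|^2 (Fletcher-Reeves form)."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    p = -g
    while np.linalg.norm(g) > tol:
        alpha = (g @ g) / (p @ (A @ p))   # exact one-dimensional minimization
        x = x + alpha * p
        g_new = A @ x - b
        p = -g_new + ((g_new @ g_new) / (g @ g)) * p
        g = g_new
    return x
```

In the paper, the analogous recursion acts on the variations Δx(t), Δu(t), Δπ, and a restoration phase is inserted between conjugate-gradient phases when the problem is not linear-quadratic.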

3.
This paper considers the numerical solution of the problem of minimizing a functional I, subject to differential constraints, nondifferential constraints, and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized while the constraints are satisfied to a predetermined accuracy. The modified quasilinearization algorithm (MQA) is extended so that it can be applied to the solution of optimal control problems with general boundary conditions, where the state is not explicitly given at the initial point. The algorithm presented here preserves the MQA descent property on the cumulative error. This error consists of the error in the optimality conditions and the error in the constraints. Three numerical examples are presented in order to illustrate the performance of the algorithm. The numerical results are discussed to show the feasibility as well as the convergence characteristics of the algorithm. This work was supported by the Electrical Research Institute of Mexico and by CONACYT, Consejo Nacional de Ciencia y Tecnologia, Mexico City, Mexico.

4.
This paper contains general transformation techniques useful to convert minimax problems of optimal control into the Mayer-Bolza problem of the calculus of variations [Problem (P)]. We consider two types of minimax problems: minimax problems of Type (Q), in which the minimax function depends on the state and does not depend on the control; and minimax problems of Type (R), in which the minimax function depends on both the state and the control. Both Problem (Q) and Problem (R) can be reduced to Problem (P). For Problem (Q), we exploit the analogy with a bounded-state problem in combination with a transformation of the Jacobson type. This requires the proper augmentation of the state vector x(t), the control vector u(t), and the parameter vector π, as well as the proper augmentation of the constraining relations. As a result of the transformation, the unknown minimax value of the performance index becomes a component of the parameter vector being optimized. For Problem (R), we exploit the analogy with a bounded-control problem in combination with a transformation of the Valentine type. This requires the proper augmentation of the control vector u(t) and the parameter vector π, as well as the proper augmentation of the constraining relations. As a result of the transformation, the unknown minimax value of the performance index becomes a component of the parameter vector being optimized. In a subsequent paper (Part 2), the transformation techniques presented here are employed in conjunction with the sequential gradient-restoration algorithm for solving optimal control problems on a digital computer; both the single-subarc approach and the multiple-subarc approach are discussed. This research was supported by the National Science Foundation, Grant No. ENG-79-18667, and by Wright-Patterson Air Force Base, Contract No. F33615-80-C3000. This paper is a condensation of the investigations reported in Refs. 1–7. The authors are indebted to E. M. Coker and E. M. Sims for analytical and computational assistance.
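The reduction described above can be summarized schematically; the notation below is ours and only sketches the idea, with the slack-variable step corresponding to the Valentine-type device used for Problem (R) (Problem (Q) uses a Jacobson-type transformation of the state instead).

```latex
% Schematic Chebyshev-to-Bolza reduction (illustrative notation).
\[
  \min_{u(\cdot),\,\pi}\ \max_{0 \le t \le 1} \phi\bigl(x(t),u(t),t\bigr)
  \;\longrightarrow\;
  \min_{u(\cdot),\,\pi,\,\omega}\ \omega
  \quad\text{s.t.}\quad
  \phi\bigl(x(t),u(t),t\bigr) - \omega + k^{2}(t) = 0, \qquad 0 \le t \le 1,
\]
% The unknown minimax value becomes the extra parameter \omega, adjoined to \pi,
% and k(t) is an auxiliary (slack) control enforcing \phi - \omega \le 0 pointwise.
```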

5.
A convergence analysis is presented for a general class of derivative-free algorithms for minimizing a function f(x) for which the analytic form of the gradient and the Hessian is impractical to obtain. The class of algorithms accepts finite-difference approximations to the gradient, with stepsizes chosen in such a way that the length of the stepsize must meet two conditions involving the previous stepsize and the distance from the last estimate of the solution to the current estimate. The algorithms also maintain an approximation to the second-derivative matrix and require that the change in x made at each iteration be subject to a bound that is also revised automatically. The convergence theorems have the features that the starting point x₁ need not be close to the true solution and f(x) need not be convex. Furthermore, despite the fact that the second-derivative approximation may not converge to the true Hessian at the solution, the rate of convergence is still Q-superlinear. The theory is also shown to be applicable to a modification of Powell's dog-leg algorithm.
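For concreteness, a forward-difference gradient of the kind such derivative-free methods consume looks as follows; the paper's specific stepsize rules (tying h to the previous stepsize and to the distance between successive iterates) are not reproduced, so treat h here as a free input. Names are ours.

```python
import numpy as np

def forward_difference_gradient(f, x, h):
    """Forward-difference approximation to the gradient of f at x.
    One extra function evaluation per coordinate; accuracy is O(h)
    per component, so the choice of h matters and is exactly what the
    convergence conditions summarized above constrain."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty(x.size)
    for i in range(x.size):
        step = np.zeros(x.size)
        step[i] = h
        g[i] = (f(x + step) - fx) / h
    return g
```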

6.
In a previous paper (Part 1), we presented general transformation techniques useful to convert minimax problems of optimal control into the Mayer-Bolza problem of the calculus of variations [Problem (P)]. We considered two types of minimax problems: minimax problems of Type (Q), in which the minimax function depends on the state and does not depend on the control; and minimax problems of Type (R), in which the minimax function depends on both the state and the control. Both Problem (Q) and Problem (R) can be reduced to Problem (P). In this paper, the transformation techniques presented in Part 1 are employed in conjunction with the sequential gradient-restoration algorithm for solving optimal control problems on a digital computer. Both the single-subarc approach and the multiple-subarc approach are employed. Three test problems characterized by known analytical solutions are solved numerically. It is found that the combination of transformation techniques and sequential gradient-restoration algorithm yields numerical solutions which are quite close to the analytical solutions from the point of view of the minimax performance index. The relative differences between the numerical values and the analytical values of the minimax performance index are of order 10⁻³ if the single-subarc approach is employed. These relative differences are of order 10⁻⁴ or better if the multiple-subarc approach is employed. This research was supported by the National Science Foundation, Grant No. ENG-79-18667, and by Wright-Patterson Air Force Base, Contract No. F33615-80-C3000. This paper is a condensation of the investigations reported in Refs. 1–7. The authors are indebted to E. M. Coker and E. M. Sims for analytical and computational assistance.

7.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the restoration phase. It is shown that the Lagrange multipliers associated with the restoration phase not only solve the auxiliary minimization problem of the restoration phase, but are also endowed with a supplementary optimality property: they minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to L. Cesari. This work was supported by a grant of the National Science Foundation.

8.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the gradient phase. It is shown that the Lagrange multipliers associated with the gradient phase not only solve the auxiliary minimization problem of the gradient phase, but are also endowed with a supplementary optimality property: they minimize the error in the optimality conditions, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to R. Bellman. This work was supported by the National Science Foundation, Grant No. ENG-79-18667.

9.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, a state inequality constraint, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. The approach taken is a sequence of two-phase processes or cycles, composed of a gradient phase and a restoration phase. The gradient phase involves a single iteration and is designed to decrease the functional, while the constraints are satisfied to first order. The restoration phase involves one or several iterations and is designed to restore the constraints to a predetermined accuracy, while the norm of the variations of the control and the parameter is minimized. The principal property of the algorithm is that it produces a sequence of feasible suboptimal solutions: the functions x(t), u(t), π obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the functionals of any two elements of the sequence are comparable. Here, the state inequality constraint is handled in a direct manner. A predetermined number and sequence of subarcs is assumed and, for the time interval for which the trajectory of the system lies on the state boundary, the control is determined so that the state boundary is satisfied. The state boundary and the entrance conditions are assumed to be linear in x and π, and the sequential gradient-restoration algorithm is constructed in such a way that the state inequality constraint is satisfied at each iteration of the gradient phase and the restoration phase along all of the subarcs composing the trajectory. At first glance, the assumed linearity of the state boundary and the entrance conditions appears to be a limitation to the theory. Actually, this is not the case. The reason is that every constrained minimization problem can be brought to the present form through the introduction of additional state variables. To facilitate the numerical solution on digital computers, the actual time θ is replaced by the normalized time t, defined in such a way that each of the subarcs composing the extremal arc has a normalized time length Δt = 1. In this way, variable-time corner conditions and variable-time terminal conditions are transformed into fixed-time corner conditions and fixed-time terminal conditions. The actual times τ₁, τ₂, τ at which (i) the state boundary is entered, (ii) the state boundary is exited, and (iii) the terminal boundary is reached are regarded to be components of the parameter π being optimized. The numerical examples illustrating the theory demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This paper is based in part on a portion of the dissertation which the first author submitted in partial fulfillment of the requirements for the PhD Degree at the Air Force Institute of Technology, Wright-Patterson AFB, Ohio. This research was supported in part by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor H. Y. Huang, Dr. R. R. Iyer, Dr. J. N. Damoulakis, Mr. A. Esterle, and Mr. J. R. Cloutier for helpful discussions as well as analytical and numerical assistance. This paper is a condensation of the investigations reported in Refs. 1–2.

10.
In this paper, the problem of minimizing a nonlinear function f(x) subject to a nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. A conjugate gradient-restoration algorithm similar to those developed by Miele et al. (Refs. 1 and 2) is employed. This particular algorithm consists of a sequence of conjugate gradient-restoration cycles. The conjugate gradient portion of each cycle is based upon a conjugate gradient algorithm that is derived for the special case of a quadratic function subject to linear constraints. This portion of the cycle involves a single step and is designed to decrease the value of the function while satisfying the constraints to first order. The restoration portion of each cycle involves one or more iterations and is designed to restore the norm of the constraint function to within a predetermined tolerance about zero. The conjugate gradient-restoration sequence is reinitialized with a simple gradient step every n − q or fewer cycles. At the beginning of each simple gradient step, a positive-definite preconditioning matrix is used to accelerate the convergence of the algorithm. The preconditioner chosen, H⁺, is the positive-definite reflection of the Hessian matrix H. The matrix H⁺ is defined herein to be a matrix whose eigenvectors are identical to those of the Hessian and whose eigenvalues are the moduli of the latter's eigenvalues. A singular-value decomposition is used to efficiently construct this matrix. The selection of the matrix H⁺ as the preconditioner is motivated by the fact that gradient algorithms exhibit excellent convergence characteristics on quadratic problems whose Hessians have small condition numbers. To this end, the transforming operator (H⁺)^(1/2) produces a transformed Hessian with a condition number of one. A higher-order example, which has resulted from a new eigenstructure assignment formulation (Ref. 3), is used to illustrate the rapidity of convergence of the algorithm, along with two simpler examples.
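A small sketch of the preconditioner construction, assuming a symmetric Hessian: the positive-definite reflection keeps the eigenvectors and takes the moduli of the eigenvalues. The paper builds it from a singular-value decomposition; for symmetric H the symmetric eigendecomposition below yields the same matrix, and the function names are ours.

```python
import numpy as np

def positive_definite_reflection(H):
    """Positive-definite reflection H_plus of a symmetric matrix H:
    same eigenvectors as H, eigenvalues replaced by their moduli."""
    w, V = np.linalg.eigh(H)                  # H = V diag(w) V^T
    return (V * np.abs(w)) @ V.T              # V diag(|w|) V^T

# Example: an indefinite Hessian becomes positive definite with the same
# eigenvectors.  Transforming variables with a square root of H_plus gives
# a Hessian whose eigenvalues all have modulus one, which is the
# condition-number-one property cited in the abstract.
H = np.array([[2.0, 0.0], [0.0, -5.0]])
H_plus = positive_definite_reflection(H)      # diag(2, 5)
```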

11.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. A modified quasilinearization algorithm is developed. Its main property is the descent property in the performance index R, the cumulative error in the constraints and the optimality conditions. Modified quasilinearization differs from ordinary quasilinearization because of the inclusion of the scaling factor (or stepsize) α in the system of variations. The stepsize is determined by a one-dimensional search on the performance index R. Since the first variation δR is negative, the decrease in R is guaranteed if α is sufficiently small. Convergence to the solution is achieved when R becomes smaller than some preselected value. In order to start the algorithm, some nominal functions x(t), u(t), π and nominal multipliers λ(t), ρ(t), μ must be chosen. In a real problem, the selection of the nominal functions can be made on the basis of physical considerations. Concerning the nominal multipliers, no useful guidelines have been available thus far. In this paper, an auxiliary minimization algorithm for selecting the multipliers optimally is presented: the performance index R is minimized with respect to λ(t), ρ(t), μ. Since the functional R is quadratically dependent on the multipliers, the resulting variational problem is governed by optimality conditions which are linear and, therefore, can be solved without difficulty. To facilitate the numerical solution on digital computers, the actual time θ is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt = 1. In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded to be a component of the parameter π being optimized. The present general formulation differs from that of Ref. 3 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0 ≤ t ≤ 1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) there are problems involving state equality constraints which can be reduced to the present scheme through suitable transformations, and (iii) there are some problems involving inequality constraints which can be reduced to the present scheme through the introduction of auxiliary variables. Numerical examples are presented for the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
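The stepsize logic can be caricatured in a few lines; this is only a stand-in for the one-dimensional search on R, with R(α) assumed to return the cumulative error (constraint error plus optimality-condition error) of the varied functions for a given scaling factor, and with plain halving in place of the paper's search.

```python
def select_stepsize(R, alpha_start=1.0, shrink=0.5, alpha_min=1e-10):
    """Shrink the scaling factor alpha of the quasilinearization
    variations until the cumulative error decreases below its nominal
    value R(0).  Since the first variation of R is negative, a small
    enough alpha always succeeds (up to the alpha_min safeguard)."""
    R_nominal = R(0.0)
    alpha = alpha_start
    while R(alpha) >= R_nominal and alpha > alpha_min:
        alpha *= shrink
    return alpha
```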

12.
Rapid progress in information and computer technology allows the development of more advanced optimal control algorithms dealing with real-world problems. In this paper, which is Part 1 of a two-part sequence, a multiple-subarc gradient-restoration algorithm (MSGRA) is developed. We note that the original version of the sequential gradient-restoration algorithm (SGRA) was developed by Miele et al. in single-subarc form (SSGRA) during the years 1968–86; it has been applied successfully to solve a large number of optimal control problems of atmospheric and space flight. MSGRA is an extension of SSGRA, the single-subarc gradient-restoration algorithm. The primary reason for MSGRA is to enhance the robustness of gradient-restoration algorithms and also to enlarge the field of applications. Indeed, MSGRA can be applied to optimal control problems involving multiple subsystems as well as discontinuities in the state and control variables at the interface between contiguous subsystems. Two features of MSGRA are increased automation and efficiency. The automation of MSGRA is enhanced via time normalization: the actual time domain is mapped into a normalized time domain such that the normalized time length of each subarc is 1. The efficiency of MSGRA is enhanced by using the method of particular solutions to solve the multipoint boundary-value problems associated with the gradient phase and the restoration phase of the algorithm. In a companion paper [Part 2 (Ref. 2)], MSGRA is applied to compute the optimal trajectory for a multistage launch vehicle design, specifically, a rocket-powered spacecraft ascending from the Earth's surface to a low Earth orbit (LEO). Single-stage, double-stage, and triple-stage configurations are considered and compared.
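The time-normalization device can be illustrated by the chain rule on a single subarc; the wrapper below is a sketch under our own naming, not MSGRA code: a subarc of actual length tau starting at theta0 is mapped onto 0 ≤ t ≤ 1, and tau joins the parameters being optimized.

```python
def normalized_dynamics(f, theta0, tau):
    """Given dynamics dx/dtheta = f(x, u, theta) on an actual-time subarc
    [theta0, theta0 + tau], return the right-hand side on the normalized
    time t in [0, 1]:  dx/dt = tau * f(x, u, theta0 + tau * t)."""
    return lambda x, u, t: tau * f(x, u, theta0 + tau * t)
```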

13.
Two existing function-space quasi-Newton algorithms, the Davidon algorithm and the projected gradient algorithm, are modified so that they may handle directly control-variable inequality constraints. A third quasi-Newton-type algorithm, developed by Broyden, is extended to optimal control problems. The Broyden algorithm is further modified so that it may handle directly control-variable inequality constraints. From a computational viewpoint, dyadic operator implementation of quasi-Newton methods is shown to be superior to the integral kernel representation. The quasi-Newton methods, along with the steepest descent method and two conjugate gradient algorithms, are simulated on three relatively simple (yet representative) bounded control problems, two of which possess singular subarcs. Overall, the Broyden algorithm was found to be superior. The most notable result of the simulations was the clear superiority of the Broyden and Davidon algorithms in producing a sharp singular control subarc.This research was supported by the National Science Foundation under Grant Nos. GK-30115 and ENG 74-21618 and by the National Aeronautics and Space Administration under Contract No. NAS 9-12872.
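As a reminder of the kind of update being carried over to function space, here is the classic finite-dimensional Broyden rank-one secant update; whether this is exactly the variant the paper extends is not stated in the abstract, and its dyadic-operator implementation and handling of control bounds are not reproduced. Names are ours.

```python
import numpy as np

def broyden_update(B, s, y):
    """Broyden rank-one secant update: the new approximation satisfies
    the secant condition B_new @ s = y while differing from B by a
    rank-one (dyadic) term, B_new = B + ((y - B s) s^T) / (s^T s)."""
    return B + np.outer(y - B @ s, s) / (s @ s)
```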

14.
This paper is concerned with optimal flight trajectories in the presence of windshear. With particular reference to take-off, eight fundamental optimization problems [Problems (P1)–(P8)] are formulated under the assumptions that the power setting is held at the maximum value and that the airplane is controlled through the angle of attack. Problems (P1)–(P3) are least-square problems of the Bolza type. Problems (P4)–(P8) are minimax problems of the Chebyshev type, which can be converted into Bolza problems through suitable transformations. These problems are solved employing the dual sequential gradient-restoration algorithm (DSGRA) for optimal control problems. Numerical results are obtained for a large number of combinations of performance indexes, boundary conditions, windshear models, and windshear intensities. However, for the sake of brevity, the presentation of this paper is restricted to Problem (P6), minimax h, and Problem (P7). Inequality constraints are imposed on the angle of attack and the time derivative of the angle of attack. The following conclusions are reached: (i) optimal trajectories are considerably superior to constant-angle-of-attack trajectories; (ii) optimal trajectories achieve minimum velocity at about the time when the windshear ends; (iii) optimal trajectories can be found which transfer an aircraft from a quasi-steady condition to a quasi-steady condition through a windshear; (iv) as the boundary conditions are relaxed, a higher final altitude can be achieved, albeit at the expense of a considerable velocity loss; (v) among the optimal trajectories investigated, those solving Problem (P7) are to be preferred, because the altitude distribution exhibits a monotonic behavior; in addition, for boundary conditions BC2 and BC3, the peak angle of attack is below the maximum permissible value; (vi) moderate windshears and relatively severe windshears are survivable employing an optimized flight strategy; however, extremely severe windshears are not survivable, even employing an optimized flight strategy; and (vii) the sequential gradient-restoration algorithm (SGRA), employed in its dual form (DSGRA), has proven to be a powerful algorithm for solving the problem of the optimal flight trajectories in a windshear. Portions of this paper were presented at the AIAA Atmospheric Flight Mechanics Conference, Snowmass, Colorado, August 19–21, 1985. The authors are indebted to Boeing Commercial Aircraft Company, Seattle, Washington and to Pratt and Whitney Aircraft, East Hartford, Connecticut for supplying some of the technical data pertaining to this study. This research was supported by NASA-Langley Research Center, Grant No. NAG-1-516. The authors are indebted to Dr. R. L. Bowles, NASA-Langley Research Center, Hampton, Virginia, for helpful discussions. This paper is based in part on Refs. 1–5.

15.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper, we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

16.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, a state inequality constraint, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. A modified quasilinearization algorithm is developed. Its main property is the descent property in the performance index R, the cumulative error in the constraints and the optimality conditions. Modified quasilinearization differs from ordinary quasilinearization because of the inclusion of the scaling factor (or stepsize) α in the system of variations. The stepsize is determined by a one-dimensional search on the performance index R. Since the first variation δR is negative, the decrease in R is guaranteed if α is sufficiently small. Convergence to the solution is achieved when R becomes smaller than some preselected value. Here, the state inequality constraint is handled in a direct manner. A predetermined number and sequence of subarcs is assumed and, for the time interval for which the trajectory of the system lies on the state boundary, the control is determined so that the state boundary is satisfied. The state boundary and the entrance conditions are assumed to be linear in x and π, and the modified quasilinearization algorithm is constructed in such a way that the state inequality constraint is satisfied at each iteration and along all of the subarcs composing the trajectory. At first glance, the assumed linearity of the state boundary and the entrance conditions appears to be a limitation to the theory. Actually, this is not the case. The reason is that every constrained minimization problem can be brought to the present form through the introduction of additional state variables. In order to start the algorithm, some nominal functions x(t), u(t), π and nominal multipliers must be chosen. In a real problem, the selection of the nominal functions can be made on the basis of physical considerations. Concerning the nominal multipliers, no useful guidelines have been available thus far. In this paper, an auxiliary minimization algorithm for selecting the multipliers optimally is presented: the performance index R is minimized with respect to the multipliers. Since the functional R is quadratically dependent on the multipliers, the resulting variational problem is governed by optimality conditions which are linear and, therefore, can be solved without difficulty. The numerical examples illustrating the theory demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Dr. R. R. Iyer and Mr. A. K. Aggarwal for helpful discussions as well as analytical and numerical assistance. This paper is a condensation of the investigations described in Refs. 1–2.

17.
18.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. Four types of gradient-restoration algorithms are considered, and their relative efficiency (in terms of number of iterations for convergence) is evaluated. The algorithms being considered are as follows: sequential gradient-restoration algorithm, complete restoration (SGRA-CR); sequential gradient-restoration algorithm, incomplete restoration (SGRA-IR); combined gradient-restoration algorithm, no restoration (CGRA-NR); and combined gradient-restoration algorithm, incomplete restoration (CGRA-IR). Evaluation of these algorithms is accomplished through six numerical examples. The results indicate that (i) the inclusion of a restoration phase is necessary for rapid convergence and (ii) while SGRA-CR is the most desirable algorithm if feasibility of the suboptimal solutions is required, rapidity of convergence to the optimal solution can be increased if one employs algorithms with incomplete restoration, in particular, CGRA-IR. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185.

19.
The first part of this paper considers a system described by p algebraic or transcendental equations involving n variables, with n > p. A nominal state, not satisfying all the equations, is given. An iterative procedure is developed leading to a varied state satisfying all the equations. The procedure involves quasilinearization with an added optimality condition, namely, the requirement of least-square change of the coordinates. Two examples illustrating the rapid convergence of the algorithm are supplied. The second part considers a system described by n first-order differential equations involving n state variables and m control variables. A nominal state and a nominal control, consistent with the boundary conditions, but not satisfying the equations, are given. An iterative procedure is developed leading to a varied state and a varied control consistent with the boundary conditions and the equations. The procedure involves quasilinearization with an added optimality condition, namely, the requirement of least-square change of the control and the state. Two examples illustrating the rapid convergence of the algorithm are supplied. The above procedures can be included in some of the iterative algorithms for minimizing functions or functionals involving variables subject to constraints, namely, gradient methods, whether ordinary, accelerated, or conjugate. Each gradient phase is alternated with a buffer phase, the restoration phase described here. This research, supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67, and by the NASA-Manned Spacecraft Center, Grant No. NGR-44-006-089, is a condensation of the investigations described in Refs. 1 and 2. Portions of this paper were presented by the senior author at the Second Hawaii International Conference on System Sciences, Honolulu, Hawaii, January 22–24, 1969. The authors are indebted to Professor H. Y. Huang and Mr. R. R. Iyer for helpful discussions.
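A minimal sketch of the first part (restoration of an algebraic system), under our own naming: one quasilinearization step with least-square change of the coordinates is the minimum-norm correction dx = −Jᵀ(JJᵀ)⁻¹φ(x), iterated until the equations are satisfied.

```python
import numpy as np

def restoration_step(phi, jac, x):
    """One restoration iteration for p equations phi(x) = 0 in n > p
    unknowns: linearize about x and apply the correction of least-square
    norm (the Gauss-Newton minimum-norm step)."""
    J = jac(x)                                         # p-by-n Jacobian
    return x - J.T @ np.linalg.solve(J @ J.T, phi(x))

# Example: restore the infeasible point (2, 0) onto the circle x1^2 + x2^2 = 1.
phi = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
x = np.array([2.0, 0.0])
for _ in range(6):
    x = restoration_step(phi, jac, x)                  # converges to (1, 0)
```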

20.
Recent advances in gradient algorithms for optimal control problems
This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered. In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it enlarges dramatically the number and variety of problems of optimal control which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control or the state or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary.
The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.
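As one example of the transformation techniques mentioned in this closing section, a bounded control can be recast as a nondifferential equality constraint by a slack variable; the notation is illustrative rather than the survey's own.

```latex
% Illustrative slack-variable device for topic (iii), bounded control.
\[
  u_{\min} \le u(t) \le u_{\max}
  \quad\Longleftrightarrow\quad
  \bigl(u(t) - u_{\min}\bigr)\bigl(u_{\max} - u(t)\bigr) - k^{2}(t) = 0
  \ \text{ for some auxiliary control } k(t),
\]
% after which the problem fits the nondifferential-constraint formulation
% treated by the gradient-restoration algorithms surveyed above.
```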
