Similar Documents
A total of 20 similar documents were found.
1.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the gradient phase. It is shown that the Lagrange multipliers associated with the gradient phase not only solve the auxiliary minimization problem of the gradient phase, but are also endowed with a supplementary optimality property: they minimize the error in the optimality conditions, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to R. Bellman. This work was supported by the National Science Foundation, Grant No. ENG-79-18667.
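To make the supplementary property concrete, one schematic rendering in standard notation is sketched below; the symbols (integrand f0, dynamics φ, Hamiltonian H, multipliers λ(t) and μ, terminal conditions ψ = 0) are assumptions made for illustration and are not taken from the abstract. For given state, control, and parameter, the gradient-phase multipliers can be read as the minimizers of the residual of the remaining first-order optimality conditions:

$$
\min_{\lambda(\cdot),\,\mu}\; E=\int_{0}^{1}\lVert H_u\rVert^{2}\,dt
+\Bigl\lVert \int_{0}^{1} H_\pi\,dt+\psi_\pi^{\mathsf T}\mu \Bigr\rVert^{2},
\qquad H=f_0+\lambda^{\mathsf T}\varphi,
$$

subject to the multiplier differential equation λ̇ = -H_x and the boundary condition λ(1) = ψ_xᵀμ, which play the role of the multiplier differential equations and boundary conditions mentioned in the abstract.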

2.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. A sequential algorithm composed of the alternate succession of gradient phases and restoration phases is presented. This sequential algorithm is constructed in such a way that the differential equations and boundary conditions are satisfied at the end of each iteration, that is, at the end of a complete gradient-restoration phase; hence, the value of the functional at the end of one iteration is comparable with the value of the functional at the end of any other iteration. In the gradient phase, nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ, leading to varied functions x̃(t), ũ(t), π̃, are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the linearized differential equations, the linearized boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Since the constraints are satisfied only to first order during the gradient phase, the functions x̃(t), ũ(t), π̃ may violate the differential equations and/or the boundary conditions. This being the case, a restoration phase is needed prior to starting the next gradient phase. In this restoration phase, the functions x̃(t), ũ(t), π̃ are assumed to be the nominal functions. Variations Δx̃(t), Δũ(t), Δπ̃, leading to varied functions x̂(t), û(t), π̂, consistent with all the differential equations and boundary conditions, are determined. These variations are obtained by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Of course, the restoration phase must be performed iteratively until the cumulative error in the differential equations and boundary conditions becomes smaller than some preselected value. If the gradient stepsize is α, an order-of-magnitude analysis shows that the gradient corrections are Δx = O(α), Δu = O(α), Δπ = O(α), while the restoration corrections are of O(α²). Hence, for α sufficiently small, the restoration phase preserves the descent property of the gradient phase: the functional I decreases between any two successive iterations. Methods to determine the gradient stepsize in an optimal fashion are discussed. Examples are presented for both the fixed-final-time case and the free-final-time case. The numerical results show the rapid convergence characteristics of the sequential gradient-restoration algorithm. The portions of this paper dealing with the fixed-final-time case were presented by the senior author at the 2nd Hawaii International Conference on System Sciences, Honolulu, Hawaii, 1969. The portions of this paper dealing with the free-final-time case were presented by the senior author at the 20th International Astronautical Congress, Mar del Plata, Argentina, 1969. This research, supported by the NASA-Manned Spacecraft Center, Grant No. NGR-44-006-089, Supplement No. 1, is a condensation of the investigations presented in Refs. 1–5. The authors are indebted to Professor H. Y. Huang for helpful discussions.
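As a rough illustration of the cycle structure described above, the Python sketch below applies the same two-phase logic to a finite-dimensional analogue, minimize f(z) subject to c(z) = 0. It is not the authors' function-space algorithm: the function names, the projected-gradient form of the gradient phase, the Gauss-Newton form of the restoration phase, and the toy problem are all assumptions made for this sketch.

```python
import numpy as np

def sgra_cycle(z, grad_f, c, jac_c, alpha=0.1, tol=1e-10, max_restorations=50):
    """One gradient-restoration cycle for the finite-dimensional analogue
    min f(z) subject to c(z) = 0 (hypothetical stand-in for the functional problem)."""
    # Gradient phase: first-order decrease of f subject to the linearized
    # constraints, i.e. a steepest-descent step projected onto their null space.
    J, g = jac_c(z), grad_f(z)
    P = np.eye(len(z)) - J.T @ np.linalg.solve(J @ J.T, J)   # assumes full row rank
    z = z - alpha * (P @ g)
    # Restoration phase: minimum-norm (least-squares) corrections, repeated until
    # the constraint error falls below the preselected tolerance.
    for _ in range(max_restorations):
        r = c(z)
        if r @ r <= tol:
            break
        J = jac_c(z)
        z = z - J.T @ np.linalg.solve(J @ J.T, r)
    return z

# Toy usage: minimize x^2 + y^2 subject to x + y - 1 = 0 (solution is (0.5, 0.5)).
grad_f = lambda z: 2.0 * z
c = lambda z: np.array([z[0] + z[1] - 1.0])
jac_c = lambda z: np.array([[1.0, 1.0]])
z = np.array([2.0, -0.5])
for _ in range(30):
    z = sgra_cycle(z, grad_f, c, jac_c)
print(z)   # approaches (0.5, 0.5); the constraint is restored at the end of every cycle
```

The point carried over from the abstract is structural: each gradient step lowers the objective but violates the constraints only to second order in the stepsize, and the restoration loop re-imposes feasibility, so the objective values of successive cycles remain comparable.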

3.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the restoration phase. It is shown that the Lagrange multipliers associated with the restoration phase not only solve the auxiliary minimization problem of the restoration phase, but are also endowed with a supplementary optimality property: they minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to L. Cesari. This work was supported by a grant of the National Science Foundation.

4.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, a state inequality constraint, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. The approach taken is a sequence of two-phase processes or cycles, composed of a gradient phase and a restoration phase. The gradient phase involves a single iteration and is designed to decrease the functional, while the constraints are satisfied to first order. The restoration phase involves one or several iterations and is designed to restore the constraints to a predetermined accuracy, while the norm of the variations of the control and the parameter is minimized. The principal property of the algorithm is that it produces a sequence of feasible suboptimal solutions: the functions x(t), u(t), π obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the functionals of any two elements of the sequence are comparable. Here, the state inequality constraint is handled in a direct manner. A predetermined number and sequence of subarcs is assumed and, for the time interval for which the trajectory of the system lies on the state boundary, the control is determined so that the state boundary is satisfied. The state boundary and the entrance conditions are assumed to be linear in x and π, and the sequential gradient-restoration algorithm is constructed in such a way that the state inequality constraint is satisfied at each iteration of the gradient phase and the restoration phase along all of the subarcs composing the trajectory. At first glance, the assumed linearity of the state boundary and the entrance conditions appears to be a limitation to the theory. Actually, this is not the case. The reason is that every constrained minimization problem can be brought to the present form through the introduction of additional state variables. To facilitate the numerical solution on digital computers, the actual time is replaced by the normalized time t, defined in such a way that each of the subarcs composing the extremal arc has a normalized time length Δt=1. In this way, variable-time corner conditions and variable-time terminal conditions are transformed into fixed-time corner conditions and fixed-time terminal conditions. The actual times at which (i) the state boundary is entered, (ii) the state boundary is exited, and (iii) the terminal boundary is reached are regarded to be components of the parameter π being optimized. The numerical examples illustrating the theory demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This paper is based in part on a portion of the dissertation which the first author submitted in partial fulfillment of the requirements for the PhD Degree at the Air Force Institute of Technology, Wright-Patterson AFB, Ohio. This research was supported in part by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor H. Y. Huang, Dr. R. R. Iyer, Dr. J. N. Damoulakis, Mr. A. Esterle, and Mr. J. R. Cloutier for helpful discussions as well as analytical and numerical assistance. This paper is a condensation of the investigations reported in Refs. 1–2.
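The normalized-time device described above can be written schematically as follows; the subarc boundary times are denoted here by θ0 = 0 < θ1 < θ2 < θ3 (entrance, exit, and final times), a notation assumed for illustration rather than quoted from the paper. On subarc i the actual time is mapped to t = (θ - θ_{i-1})/(θ_i - θ_{i-1}), so that every subarc has normalized length Δt = 1 and the subarc durations appear as multiplicative factors in the dynamics:

$$
\frac{dx}{dt} = (\theta_i - \theta_{i-1})\,\varphi(x, u, \pi), \qquad 0 \le t \le 1,
$$

with θ1, θ2, θ3 appended to the parameter π, which converts the variable-time corner and terminal conditions into fixed-time conditions.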

5.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. The approach taken is a sequence of two-phase processes or cycles, composed of a gradient phase and a restoration phase. The gradient phase involves a single iteration and is designed to decrease the functional, while the constraints are satisfied to first order. The restoration phase involves one or several iterations and is designed to restore the constraints to a predetermined accuracy, while the norm of the variations of the control and the parameter is minimized. The principal property of the algorithm is that it produces a sequence of feasible suboptimal solutions: the functions x(t), u(t), π obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the functionals of any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, and the stepsize of the restoration phase by a one-dimensional search on the constraint error P. If αg is the gradient stepsize and αr is the restoration stepsize, the gradient corrections are of O(αg) and the restoration corrections are of O(αr αg²). Therefore, for αg sufficiently small, the restoration phase preserves the descent property of the gradient phase: the functional Î at the end of any complete gradient-restoration cycle is smaller than the functional I at the beginning of the cycle. To facilitate the numerical solution on digital computers, the actual time is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt=1. In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded to be a component of the parameter π being optimized. The present general formulation differs from that of Ref. 4 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0 ≤ t ≤ 1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) problems involving state equality constraints can be reduced to the present scheme through suitable transformations, and (iii) problems involving inequality constraints can be reduced to the present scheme through suitable transformations. The latter statement applies, for instance, to the following situations: (a) problems with bounded control, (b) problems with bounded state, (c) problems with bounded time rate of change of the state, and (d) problems where some bound is imposed on an arbitrarily prescribed function of the parameter, the control, the state, and the time rate of change of the state. Numerical examples are presented for both the fixed-final-time case and the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
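The two one-dimensional searches mentioned above, on the augmented functional J for the gradient stepsize and on the constraint error P for the restoration stepsize, can be sketched with any scalar minimizer. The Python fragment below uses golden-section search on toy stand-ins for J and P; the search method and the toy functions are assumptions for illustration, not details taken from the paper.

```python
import math

def golden_search(phi, a=0.0, b=1.0, iters=40):
    """Minimize a scalar function phi on [a, b] by golden-section search."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if phi(c) < phi(d):
            b, d = d, c          # minimum lies in [a, d]; shrink from the right
            c = b - g * (b - a)
        else:
            a, c = c, d          # minimum lies in [c, b]; shrink from the left
            d = a + g * (b - a)
    return 0.5 * (a + b)

# Toy stand-ins for the augmented functional J(alpha_g) and the constraint error P(alpha_r).
J = lambda alpha_g: (alpha_g - 0.3) ** 2 + 1.0
P = lambda alpha_r: (alpha_r - 1.0) ** 2
alpha_g = golden_search(J)   # gradient stepsize from the search on J
alpha_r = golden_search(P)   # restoration stepsize from the search on P
print(alpha_g, alpha_r)      # approximately 0.3 and 1.0
```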

6.
This paper considers the numerical solution of two classes of optimal control problems, called Problem P1 and Problem P2 for easy identification. Problem P1 involves a functional I subject to differential constraints and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized, while the constraints and the boundary conditions are satisfied to a predetermined accuracy. Problem P2 extends Problem P1 to include nondifferential constraints to be satisfied everywhere along the interval of integration. Algorithms are developed for both Problem P1 and Problem P2. The approach taken is a sequence of two-phase cycles, composed of a gradient phase and a restoration phase. The gradient phase involves one iteration and is designed to decrease the value of the functional, while the constraints are satisfied to first order. The restoration phase involves one or more iterations and is designed to force constraint satisfaction to a predetermined accuracy, while the norm squared of the variations of the control, the parameter, and the missing components of the initial state is minimized. The principal property of both algorithms is that they produce a sequence of feasible suboptimal solutions: the functions obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the values of the functional I corresponding to any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, while the stepsize of the restoration phase is obtained by a one-dimensional search on the constraint error P. The gradient stepsize and the restoration stepsize are chosen so that the restoration phase preserves the descent property of the gradient phase. Therefore, the value of the functional I at the end of any complete gradient-restoration cycle is smaller than the value of the same functional at the beginning of that cycle. The algorithms presented here differ from those of Refs. 1 and 2, in that it is not required that the state vector be given at the initial point. Instead, the initial conditions can be absolutely general. In analogy with Refs. 1 and 2, the present algorithms are capable of handling general final conditions; therefore, they are suited for the solution of optimal control problems with general boundary conditions. Their importance lies in the fact that many optimal control problems involve initial conditions of the type considered here. Six numerical examples are presented in order to illustrate the performance of the algorithms associated with Problem P1 and Problem P2. The numerical results show the feasibility as well as the convergence characteristics of these algorithms. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-76-3075. Partial support for S. Gonzalez was provided by CONACYT, Consejo Nacional de Ciencia y Tecnologia, Mexico City, Mexico.

7.
Recent advances in gradient algorithms for optimal control problems
This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered. In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it enlarges dramatically the number and variety of problems of optimal control which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control or the state or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary.
The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.
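As a concrete instance of transformation technique (iii), bounded control, a control inequality constraint can be recast as a nondifferential equality constraint through an auxiliary control. The device below is one common choice, assumed here for illustration rather than quoted from the paper: for umin ≤ u(t) ≤ umax, introduce an auxiliary control v(t) and impose

$$
(u - u_{\min})(u_{\max} - u) - v^{2} = 0, \qquad 0 \le t \le 1,
$$

which can hold for real v exactly when u stays within its bounds; the bounded-control problem then becomes a problem with a nondifferential constraint of the type treated by the gradient-restoration algorithms above. An equivalent trigonometric substitution, u = (umax + umin)/2 + ((umax - umin)/2) sin v, removes the inequality altogether.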

8.
In Refs. 1–2, the sequential gradient-restoration algorithm and the modified quasilinearization algorithm were developed for optimal control problems with bounded state. These algorithms have a basic property: for a subarc lying on the state boundary, the state boundary equations are satisfied at every iteration, if they are satisfied at the beginning of the computational process. Thus, the subarc remains anchored on the state boundary. In this paper, the anchoring conditions employed in Refs. 1–2 are derived. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185.

9.
We obtain sufficient conditions for the convergence of iteration algorithms in Banach spaces. These conditions imply the weak convergence of subsequences to the set of solutions. Bibliography: 4 titles. Translated from Obchyslyuval'na ta Prykladna Matematyka, No. 81, 1997, pp. 70–71.

10.
Two existing function-space quasi-Newton algorithms, the Davidon algorithm and the projected gradient algorithm, are modified so that they may handle directly control-variable inequality constraints. A third quasi-Newton-type algorithm, developed by Broyden, is extended to optimal control problems. The Broyden algorithm is further modified so that it may handle directly control-variable inequality constraints. From a computational viewpoint, dyadic operator implementation of quasi-Newton methods is shown to be superior to the integral kernel representation. The quasi-Newton methods, along with the steepest descent method and two conjugate gradient algorithms, are simulated on three relatively simple (yet representative) bounded control problems, two of which possess singular subarcs. Overall, the Broyden algorithm was found to be superior. The most notable result of the simulations was the clear superiority of the Broyden and Davidon algorithms in producing a sharp singular control subarc. This research was supported by the National Science Foundation under Grant Nos. GK-30115 and ENG 74-21618 and by the National Aeronautics and Space Administration under Contract No. NAS 9-12872.
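To show what a dyadic (outer-product) implementation of a quasi-Newton update looks like in practice, the sketch below applies the classical "good Broyden" rank-one update of an approximate inverse Jacobian to drive a gradient to zero. It is a generic finite-dimensional illustration with hypothetical names; the abstract does not specify which Broyden variant or operator representation the paper adopts.

```python
import numpy as np

def broyden_solve(grad, x0, steps=50, tol=1e-10):
    """Drive grad(x) to zero with Broyden's rank-one (dyadic) update of an
    approximate inverse Jacobian H (grad is a hypothetical user-supplied callable)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))               # inverse-Jacobian approximation
    g = grad(x)
    for _ in range(steps):
        if np.linalg.norm(g) < tol:
            break
        s = -H @ g                   # quasi-Newton step
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        Hy = H @ y
        # "Good Broyden" update of the inverse: a single dyadic (outer-product)
        # correction per iteration, rather than an integral-kernel representation.
        H += np.outer(s - Hy, s @ H) / (s @ Hy)
        x, g = x_new, g_new
    return x

# Toy usage: zero the gradient of a strictly convex quadratic, i.e. solve A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(broyden_solve(lambda x: A @ x - b, np.zeros(2)))   # approaches [0.2, 0.4]
```

The update adds one outer product per iteration, which is the storage-light alternative to carrying the full operator.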

11.
Journal of Optimization Theory and Applications - This paper presents eight algorithms for solving optimal control problems with general constraints on the control and inequality constraints on the...

12.
This paper describes a collection of parallel optimal control algorithms which are suitable for implementation on an advanced computer with the facility for large-scale parallel processing. Specifically, a parallel nongradient algorithm and a parallel variable-metric algorithm are used to search for the initial costate vector that defines the solution to the optimal control problem. To avoid the computational problems sometimes associated with simultaneous forward integration of both the state and costate equations, a parallel shooting procedure based upon partitioning of the integration interval is considered. To further speed computations, parallel integration methods are proposed. Application of this all-parallel procedure to a forced Van der Pol system indicates that convergence time is significantly less than that required by highly efficient serial procedures. This research was supported in part by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under Grant No. AFOSR-77-3418.
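The interval-partitioning idea behind the parallel shooting procedure can be sketched as follows: split the integration interval at interior nodes, guess the state at each node, integrate every segment independently (these independent integrations are the natural candidates for parallel execution), and collect the mismatches at the nodes as residuals to be driven to zero. The Python code below is a deliberately serial toy with hypothetical names and a trivial linear system; it omits the costate equations, the nongradient and variable-metric searches for the initial costate, and the Van der Pol application.

```python
import numpy as np

def rk4(f, x0, t0, t1, steps=50):
    """Integrate dx/dt = f(t, x) from t0 to t1 with the classical Runge-Kutta scheme."""
    x, h, t = np.asarray(x0, dtype=float), (t1 - t0) / steps, t0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2.0, x + h / 2.0 * k1)
        k3 = f(t + h / 2.0, x + h / 2.0 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return x

def shooting_residuals(f, x_init, node_states, t_nodes):
    """Multiple-shooting residuals: segment i runs from t_nodes[i] to t_nodes[i+1]
    and must match the guessed state node_states[i] at its right endpoint."""
    starts = [x_init] + list(node_states[:-1])
    return np.concatenate([
        rk4(f, x_start, t_nodes[i], t_nodes[i + 1]) - node_states[i]
        for i, x_start in enumerate(starts)
    ])

# Toy usage: dx/dt = -x on [0, 3], partitioned into three segments; exact guesses
# at the interior and final nodes give residuals near zero.
f = lambda t, x: -x
t_nodes = [0.0, 1.0, 2.0, 3.0]
x_init = np.array([1.0])
node_states = [np.exp(-t) * x_init for t in t_nodes[1:]]
print(shooting_residuals(f, x_init, node_states, t_nodes))
```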

13.
N. Alon, Y. Azar, Combinatorica, 1991, 11(2): 97-122
Suppose we have n elements from a totally ordered domain, and we are allowed to perform p parallel comparisons in each time unit (= round). In this paper we determine, up to a constant factor, the time complexity of several approximation problems in the common parallel comparison tree model of Valiant, for all admissible values of n, p, and the accuracy parameter determining the quality of the required approximation. The problems considered include the approximate maximum problem, approximate sorting, and approximate merging. Our results imply, as special cases, all the known results about the time complexity for parallel sorting, parallel merging, and parallel selection of the maximum (in the comparison model), up to a constant factor. We mention one very special but representative result concerning the approximate maximum problem: suppose we wish to find, among the given n elements, one which belongs to the biggest n/2, where in each round we are allowed to ask n binary comparisons. We show that log* n + O(1) rounds are both necessary and sufficient in the best algorithm for this problem. Research supported in part by an Allon Fellowship, by a Bat Sheva de Rothschild grant, and by the Fund for Basic Research administered by the Israel Academy of Sciences.

14.
This paper studies some aspects of information-based complexity theory applied to estimation, identification, and prediction problems. Particular emphasis is given to constructive aspects of optimal algorithms and optimal information, taking into account the characteristics of certain types of problems. Special attention is devoted to the investigation of strongly optimal algorithms and optimal information in the linear case. Two main results are obtained for the class of problems considered. First, central algorithms are proved to be strongly optimal. Second, a simple solution is given to a particular case of optimal information, called optimal sampling design, which is of great interest in system and identification theory.

15.
Three augmented penalty function algorithms are tested and compared with an ordinary penalty function algorithm for two demonstration optimal control problems. Although the augmented penalty function is quite helpful in solving control problems with terminal state constraints, the convergence can be improved significantly by providing systematic increases in the penalty constant.
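For orientation, one standard form of an augmented penalty functional for terminal constraints ψ = 0 combines the ordinary quadratic penalty with a linear multiplier term,

$$
W = I + \lambda^{\mathsf T}\psi + k\,\psi^{\mathsf T}\psi ,
$$

with the multipliers λ updated and the penalty constant k increased between cycles. This particular form is an assumption made for illustration; the abstract does not state which augmentation the paper tests.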

16.
The Hopcroft-Tarjan and Lempel-Even-Cederbaum algorithms have generally been viewed as different approaches to planarity testing and graph embedding. Canfield and Williamson proved that, with slight modification to the Hopcroft-Tarjan algorithm, these two algorithms can be structured in such a way that they are indistinguishable on all planar graphs in terms of the order in which the vertices are processed; the situation in the case of nonplanar graphs is not discussed by Canfield and Williamson and is, in fact, much more complex. We extend the bijective techniques for comparing these two algorithms to the nonplanar case. Based on a classification scheme for the structure of overlap graphs, we precisely characterize when one of these algorithms performs better than the other.

17.
18.
In this paper, we present a model which characterizes distributed computing algorithms. The goals of this model are to offer an abstract representation of asynchronous and heterogeneous distributed systems, to present a mechanism for specifying externally observable behaviours of distributed processes, and to provide rules for combining these processes into networks with desired properties (good functioning, fairness...). Once these good properties are found, the determination of the optimal rules is studied. Subsequently, the model is applied to three classical distributed computing problems: namely, the dining philosophers problem, the mutual exclusion problem, and the deadlock problem (generalizing results of our previous publications [1], [2]). The property of fairness has a special position that we discuss.

19.
Journal of Optimization Theory and Applications - This paper presents two demonstrably convergent, first-order, differential dynamic programming algorithms for the solution of optimal control...

20.
Many space mission planning problems may be formulated as hybrid optimal control problems, i.e. problems that include both continuous-valued variables and categorical (binary) variables. There may be thousands to millions of possible solutions; a current practice is to pre-prune the categorical state space to limit the number of possible missions to a number that may be evaluated via total enumeration. Of course this risks pruning away the optimal solution. The method developed here avoids the need for pre-pruning by incorporating a new solution approach using nested genetic algorithms; an outer-loop genetic algorithm that optimizes the categorical variable sequence and an inner-loop genetic algorithm that can use either a shape-based approximation or a Lambert problem solver to quickly locate near-optimal solutions and return the cost to the outer-loop genetic algorithm. This solution technique is tested on three asteroid tour missions of increasing complexity and is shown to yield near-optimal, and possibly optimal, missions in many fewer evaluations than total enumeration would require.
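The nested arrangement described above (an outer genetic algorithm over the categorical sequence and an inner optimizer over the continuous variables that reports a cost back to the outer loop) can be sketched as follows. Everything in the sketch is a hypothetical toy: the inner loop is a crude random search standing in for the inner genetic algorithm with its Lambert or shape-based cost model, and the toy cost merely rewards visiting targets in increasing order at well-placed times.

```python
import random

def inner_cost(sequence, iters=200):
    """Inner loop: optimize continuous timing variables for a fixed categorical
    sequence and return the best cost found (toy stand-in for a Lambert or
    shape-based trajectory evaluation)."""
    best = float("inf")
    for _ in range(iters):
        times = [random.uniform(0.0, 1.0) for _ in sequence]
        timing = sum(abs(t - 0.5) for t in times)                        # continuous part
        ordering = sum(abs(b - a - 1) for a, b in zip(sequence, sequence[1:]))
        best = min(best, timing + ordering)                              # categorical part
    return best

def outer_ga(targets, pop_size=20, generations=40):
    """Outer loop: elitist genetic algorithm over visiting orders (the categorical
    variables), using swap mutation only."""
    pop = [random.sample(targets, len(targets)) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=inner_cost)[: pop_size // 2]             # keep best half
        children = []
        for parent in elite:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]                      # swap mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=inner_cost)

random.seed(0)
print(outer_ga(list(range(5))))   # tends toward the increasing order [0, 1, 2, 3, 4]
```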

