Similar Documents (20 results)
1.
2.
Recent advances in gradient algorithms for optimal control problems   (Cited by: 1; self-citations: 0; other citations: 1)
This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered.
In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it dramatically enlarges the number and variety of optimal control problems which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control, the state, or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary. The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.
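The alternating gradient-phase/restoration-phase structure can be illustrated on a finite-dimensional analogue (minimize f(x) subject to h(x) = 0) rather than the full optimal control setting; the function names, step size, and example below are illustrative, not Miele's formulation:

```python
import numpy as np

def gradient_restoration(f_grad, h, h_jac, x, alpha=0.1, tol=1e-8, iters=200):
    """Toy finite-dimensional analogue of a gradient-restoration method:
    alternate a projected-gradient descent step with least-squares Newton
    restoration steps that drive the constraint error h(x) back to zero."""
    for _ in range(iters):
        # Gradient phase: step along the gradient projected onto the
        # tangent space of the constraint surface h(x) = 0.
        J = h_jac(x)                                   # q x n Jacobian
        g = f_grad(x)
        P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
        x = x - alpha * (P @ g)
        # Restoration phase: Newton steps on h(x) = 0 (minimum-norm correction).
        while np.linalg.norm(h(x)) > tol:
            J = h_jac(x)
            x = x - J.T @ np.linalg.solve(J @ J.T, h(x))
    return x

# Example: minimize x^2 + y^2 subject to x + y = 1; minimizer is (0.5, 0.5).
f_grad = lambda x: 2 * x
h = lambda x: np.array([x[0] + x[1] - 1.0])
h_jac = lambda x: np.array([[1.0, 1.0]])
x_star = gradient_restoration(f_grad, h, h_jac, np.array([2.0, -3.0]))
```

Because the example constraint is linear, each restoration phase converges in a single Newton step; nonlinear constraints are where the inner loop earns its keep.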

3.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations.

Four types of gradient-restoration algorithms are considered, and their relative efficiency (in terms of number of iterations for convergence) is evaluated. The algorithms being considered are as follows: sequential gradient-restoration algorithm, complete restoration (SGRA-CR); sequential gradient-restoration algorithm, incomplete restoration (SGRA-IR); combined gradient-restoration algorithm, no restoration (CGRA-NR); and combined gradient-restoration algorithm, incomplete restoration (CGRA-IR).

Evaluation of these algorithms is accomplished through six numerical examples. The results indicate that (i) the inclusion of a restoration phase is necessary for rapid convergence and (ii) while SGRA-CR is the most desirable algorithm if feasibility of the suboptimal solutions is required, rapidity of convergence to the optimal solution can be increased if one employs algorithms with incomplete restoration, in particular, CGRA-IR.

This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185.

4.
The sequential gradient-restoration algorithm (SGRA) was developed in the late 1960s for the solution of equality-constrained nonlinear programs and has been successfully implemented by Miele and coworkers on many large-scale problems. The algorithm consists of two major sequentially applied phases. The first is a gradient-type minimization in a subspace tangent to the constraint surface, and the second is a feasibility restoration procedure. In Part 1, the original SGRA algorithm is described and is compared with two other related methods: the gradient projection and the generalized reduced gradient methods. Next, the special case of linear equalities is analyzed. It is shown that, in this case, only the gradient-type minimization phase is needed, and the SGRA becomes identical to the steepest-descent method. Convergence proofs for the nonlinearly constrained case are given in Part 2.

Partial support for this work was provided by the Fund for the Promotion of Research at Technion, Israel Institute of Technology, Haifa, Israel.
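The claim that no restoration phase is needed for linear equalities can be checked on a toy quadratic program: a feasible iterate stays feasible under gradient steps projected onto the null space of A. The helper below is a hypothetical sketch, not the SGRA implementation:

```python
import numpy as np

def projected_steepest_descent(Q, c, A, b, x, alpha=0.1, iters=500):
    """Minimize 0.5 x^T Q x - c^T x subject to A x = b by steepest descent
    projected onto the null space of A.  With linear equality constraints a
    feasible iterate stays feasible after every projected step, so no
    restoration phase is needed (toy illustration, not Miele's full SGRA)."""
    P = np.eye(len(x)) - A.T @ np.linalg.solve(A @ A.T, A)  # null-space projector
    # Start from a feasible point: minimum-norm correction onto A x = b.
    x = x + A.T @ np.linalg.solve(A @ A.T, b - A @ x)
    for _ in range(iters):
        x = x - alpha * (P @ (Q @ x - c))   # projected gradient step
    return x

# Minimize 0.5*(x1^2 + 2 x2^2 + 3 x3^2) subject to x1 + x2 + x3 = 1.
# Lagrange conditions give the minimizer (6/11, 3/11, 2/11).
Q = np.diag([1.0, 2.0, 3.0])
c = np.zeros(3)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x_star = projected_steepest_descent(Q, c, A, b, np.zeros(3))
```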

5.
In this paper, the problem of minimizing a nonlinear function f(x) subject to a nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. A conjugate gradient-restoration algorithm similar to those developed by Miele et al. (Refs. 1 and 2) is employed. This particular algorithm consists of a sequence of conjugate gradient-restoration cycles. The conjugate gradient portion of each cycle is based upon a conjugate gradient algorithm that is derived for the special case of a quadratic function subject to linear constraints. This portion of the cycle involves a single step and is designed to decrease the value of the function while satisfying the constraints to first order. The restoration portion of each cycle involves one or more iterations and is designed to restore the norm of the constraint function to within a predetermined tolerance about zero.

The conjugate gradient-restoration sequence is reinitialized with a simple gradient step every n − q or fewer cycles. At the beginning of each simple gradient step, a positive-definite preconditioning matrix is used to accelerate the convergence of the algorithm. The preconditioner chosen, H⁺, is the positive-definite reflection of the Hessian matrix H. The matrix H⁺ is defined herein to be a matrix whose eigenvectors are identical to those of the Hessian and whose eigenvalues are the moduli of the latter's eigenvalues. A singular-value decomposition is used to efficiently construct this matrix. The selection of the matrix H⁺ as the preconditioner is motivated by the fact that gradient algorithms exhibit excellent convergence characteristics on quadratic problems whose Hessians have small condition numbers. To this end, the transforming operator (H⁺)^(−1/2) produces a transformed Hessian with a condition number of one.

A higher-order example, which has resulted from a new eigenstructure assignment formulation (Ref. 3), is used to illustrate the rapidity of convergence of the algorithm, along with two simpler examples.
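A minimal numerical check of the H⁺ construction described above, using an eigendecomposition in place of the paper's singular-value decomposition (equivalent for a symmetric Hessian); the matrix is an arbitrary indefinite example:

```python
import numpy as np

# Positive-definite "reflection" H+ of an indefinite symmetric Hessian H:
# same eigenvectors, eigenvalues replaced by their moduli.
H = np.array([[2.0, 0.0, 1.0],
              [0.0, -5.0, 0.0],
              [1.0, 0.0, 2.0]])
w, V = np.linalg.eigh(H)
H_plus = V @ np.diag(np.abs(w)) @ V.T

# Transforming by (H+)^(-1/2) yields a Hessian whose eigenvalues all have
# modulus 1, i.e. a condition number of one in the sense used above.
T = V @ np.diag(np.abs(w) ** -0.5) @ V.T      # (H+)^(-1/2)
H_transformed = T @ H @ T
moduli = np.abs(np.linalg.eigvalsh(H_transformed))
```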

6.
Described is a not-a-priori-exponential algorithm which, for each n×n interval matrix A and each interval n-vector b, in a finite number of steps either computes the interval hull of the solution set of the system of interval linear equations Ax = b or finds a singular matrix S ∈ A.
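For a regular interval matrix, the componentwise extremes of the solution set are attained at "vertex" systems whose entries are interval endpoints, so a brute-force sketch can compute the hull by enumerating all endpoint systems. The cost is exponential in n² + n, which is exactly the a-priori-exponential behaviour a better algorithm must avoid; the function name and example data are illustrative:

```python
import itertools
import numpy as np

def interval_hull_bruteforce(A_lo, A_hi, b_lo, b_hi):
    """Brute-force interval hull of the solution set of [A]x = [b]: solve
    every 'vertex' system whose entries are interval endpoints and take
    componentwise min/max.  Assumes every vertex matrix is nonsingular
    (i.e. the interval matrix is regular)."""
    n = len(b_lo)
    lo = np.full(n, np.inf)
    hi = np.full(n, -np.inf)
    A_choices = itertools.product(*[(A_lo[i, j], A_hi[i, j])
                                    for i in range(n) for j in range(n)])
    b_pairs = [(b_lo[i], b_hi[i]) for i in range(n)]
    for A_flat in A_choices:
        A = np.array(A_flat).reshape(n, n)
        for b in itertools.product(*b_pairs):
            x = np.linalg.solve(A, np.array(b))
            lo = np.minimum(lo, x)
            hi = np.maximum(hi, x)
    return lo, hi

# 2x2 example: diagonal entries in [2, 3], off-diagonal in [-1, 0], b = (1, 0).
A_lo = np.array([[2.0, -1.0], [-1.0, 2.0]])
A_hi = np.array([[3.0, 0.0], [0.0, 3.0]])
b_lo = np.array([1.0, 0.0])
b_hi = np.array([1.0, 0.0])
lo, hi = interval_hull_bruteforce(A_lo, A_hi, b_lo, b_hi)
```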

7.
The problem of minimizing a function f(x) subject to the constraint φ(x) = 0 is considered. Here, f is a scalar, x is an n-vector, and φ is an m-vector, where m < n. A general quadratically convergent algorithm is presented. The conjugate-gradient algorithm and the variable-metric algorithms for constrained function minimization can be obtained as particular cases of the general algorithm. It is shown that, for a quadratic function subject to a linear constraint, all the particular algorithms behave identically if the one-dimensional search for the stepsize is exact. Specifically, they all produce the same sequence of points and lead to the constrained minimal point in no more than n − r descent steps, where r is the number of linearly independent constraints. The algorithms are then modified so that they can also be employed for a nonquadratic function subject to a nonlinear constraint. Some particular algorithms are tested through several numerical examples.
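For the quadratic/linear special case, the n − r step bound can be observed with a projected conjugate-gradient sketch (a generic method of the kind subsumed by the general algorithm; names and data are illustrative):

```python
import numpy as np

def constrained_cg(Q, c, A, x):
    """Conjugate gradients on 0.5 x^T Q x - c^T x restricted to the affine
    set {x : A x = A x0}, x0 feasible.  With exact line searches this
    reaches the constrained minimum in at most n - r steps, r = rank(A)."""
    P = np.eye(len(x)) - A.T @ np.linalg.solve(A @ A.T, A)  # tangent projector
    g = P @ (Q @ x - c)
    d = -g
    steps = 0
    while np.linalg.norm(g) > 1e-9:
        alpha = (g @ g) / (d @ Q @ d)                  # exact line search
        x = x + alpha * d
        g_new = P @ (Q @ x - c)
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d   # Fletcher-Reeves update
        g = g_new
        steps += 1
    return x, steps

# n = 4 variables, r = 1 constraint: convergence in at most 3 descent steps.
Q = np.diag([1.0, 2.0, 3.0, 4.0])
c = np.ones(4)
A = np.array([[1.0, 1.0, 1.0, 1.0]])
x0 = np.array([0.25, 0.25, 0.25, 0.25])   # feasible: components sum to 1
x_star, steps = constrained_cg(Q, c, A, x0)
```

Here the KKT conditions give the minimizer (0.48, 0.24, 0.16, 0.12), reached in at most n − r = 3 steps.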

8.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered, where f is a scalar, x an n-vector, and φ a q-vector, with q < n. Several conjugate gradient-restoration algorithms are analyzed: these algorithms are composed of the alternate succession of conjugate gradient phases and restoration phases. In the conjugate gradient phase, one tries to improve the value of the function while avoiding excessive constraint violation. In the restoration phase, one tries to reduce the constraint error, while avoiding excessive change in the value of the function.

Concerning the conjugate gradient phase, two classes of algorithms are considered: for algorithms of Class I, the multiplier is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier is determined so that the constraint is satisfied to first order. Concerning the restoration phase, two topics are investigated: (a) restoration type, that is, complete restoration vs incomplete restoration, and (b) restoration frequency, that is, frequent restoration vs infrequent restoration.

Depending on the combination of type and frequency of restoration, four algorithms are generated within Class I and within Class II, respectively: Algorithm (α) is characterized by complete and frequent restoration; Algorithm (β) is characterized by incomplete and frequent restoration; Algorithm (γ) is characterized by complete and infrequent restoration; and Algorithm (δ) is characterized by incomplete and infrequent restoration.

If the function f(x) is quadratic and the constraint φ(x) is linear, all of the previous algorithms are identical, that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n − q if the starting point x_s is such that φ(x_s) = 0, and at most N* = 1 + (n − q) if the starting point x_s is such that φ(x_s) ≠ 0.

In order to illustrate the theory, five numerical examples are developed. The first example refers to a quadratic function and a linear constraint. The remaining examples refer to a nonquadratic function and a nonlinear constraint. For the linear-quadratic example, all the algorithms behave identically, as predicted by the theory. For the nonlinear-nonquadratic examples, Algorithm (II-δ), which is characterized by incomplete and infrequent restoration, exhibits superior convergence characteristics.

It is of interest to compare Algorithm (II-δ) with Algorithm (I-α), which is the sequential conjugate gradient-restoration algorithm of Ref. 1 and is characterized by complete and frequent restoration. For the nonlinear-nonquadratic examples, Algorithm (II-δ) converges to the solution in a number of iterations which is about one-half to two-thirds that of Algorithm (I-α).

This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67.

9.
Dinic has shown that the classic maximum flow problem on a graph of n vertices and m edges can be reduced to a sequence of at most n − 1 so-called ‘blocking flow’ problems on acyclic graphs. For dense graphs, the best time bound known for the blocking flow problems is O(n²). Karzanov devised the first O(n²)-time blocking flow algorithm, which unfortunately is rather complicated. Later, Malhotra, Kumar and Maheshwari devised another O(n²)-time algorithm, which is conceptually very simple but has some other drawbacks. In this paper we propose a simplification of Karzanov's algorithm that is easier to implement than Malhotra, Kumar and Maheshwari's method.
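For context, a compact Dinic implementation with the simple DFS blocking flow (not Karzanov's O(n²) method, which the paper simplifies) shows the level-graph framework that any blocking-flow subroutine plugs into:

```python
from collections import deque

class Dinic:
    """Dinic's max-flow: BFS level graph plus DFS blocking flow.  This uses
    the simple O(nm) DFS blocking flow, not Karzanov's O(n^2) method the
    abstract discusses; the phase structure (at most n - 1 blocking-flow
    problems) is the same."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]
    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])      # forward edge
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])    # residual edge
    def bfs(self, s, t):
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0
    def dfs(self, u, t, f):
        if u == t:
            return f
        while self.it[u] < len(self.adj[u]):
            v, cap, rev = self.adj[u][self.it[u]]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                pushed = self.dfs(v, t, min(f, cap))
                if pushed:
                    self.adj[u][self.it[u]][1] -= pushed
                    self.adj[v][rev][1] += pushed
                    return pushed
            self.it[u] += 1
        return 0
    def max_flow(self, s, t):
        flow = 0
        while self.bfs(s, t):              # at most n - 1 phases
            self.it = [0] * self.n
            while True:                    # one blocking flow per phase
                f = self.dfs(s, t, float("inf"))
                if not f:
                    break
                flow += f
        return flow
```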

10.
The recursive projection algorithm derived in a previous paper is related to several well-known methods of numerical analysis, such as the conjugate gradient method, Rosen's method, and Henrici's method. It is connected with the general interpolation problem, with extrapolation methods, with orthogonal projection on a subspace, and with Fourier expansions. Several other connections and applications are presented.

11.
Let K be a field of characteristic 0 and consider exterior algebras of finite-dimensional K-vector spaces. In this short paper we exhibit principal quadric ideals in a family whose Castelnuovo–Mumford regularity is unbounded. This negatively answers the analogue of Stillman's Question for exterior algebras posed by I. Peeva. We show that, via the Bernstein–Gel'fand–Gel'fand correspondence, these examples also yield counterexamples to a conjecture of J. Herzog on the Betti numbers in the linear strand of syzygy modules over polynomial rings.

12.
In this paper we present algorithms which, given a circular arrangement of n uniquely numbered processes, determine the maximum number in a distributed manner. We begin with a simple unidirectional algorithm, in which the number of messages passed is bounded by 2n log n + O(n). By making several improvements to the simple algorithm, we obtain a unidirectional algorithm in which the number of messages passed is bounded by 1.5n log n + O(n). These algorithms disprove Hirschberg and Sinclair's conjecture that O(n²) is a lower bound on the number of messages passed in unidirectional algorithms for this problem. At the end of the paper we indicate how our methods can be used to improve an algorithm due to Peterson, to obtain a unidirectional algorithm using at most 1.356n log n + O(n) messages. This is the best bound so far on the number of messages passed in both the bidirectional and unidirectional cases.
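The naive unidirectional baseline, in which each id is forwarded while it exceeds the holder's own, is easy to simulate and to count messages for; it is this O(n²)-worst-case scheme that the 1.5n log n + O(n) algorithms improve on. A hypothetical simulation, not the algorithm from the abstract:

```python
def ring_max_naive(ids):
    """Naive unidirectional max-finding on a ring of uniquely numbered
    processes: each process launches its id clockwise, and a process
    forwards an incoming id only if it is larger than its own.  Returns
    (maximum id, total messages passed).  Worst case O(n^2) messages."""
    n = len(ids)
    messages = 0
    winner = None
    for i, uid in enumerate(ids):
        pos = (i + 1) % n
        messages += 1                  # initial send to the next neighbor
        while ids[pos] < uid:          # forwarded past smaller ids
            pos = (pos + 1) % n
            messages += 1
        if pos == i:                   # id survived a full lap: it is the max
            winner = uid
    return winner, messages
```

On the ring [3, 7, 1, 5, 2] the maximum's token travels the full ring (5 messages) while the others are swallowed early, for 11 messages in total.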

13.
We compare several algorithms for computing the discrete Fourier transform of n numbers. The number of “operations” of the original Cooley-Tukey algorithm is approximately 2nA(n), where A(n) is the sum of the prime divisors of n. We show that the average number of operations satisfies (1/x) ∑_{n ≤ x} 2nA(n) ~ (π²/9)(x²/log x). The average is not a good indication of the number of operations. For example, it is shown that for about half of the integers n less than x, the number of “operations” is less than n^1.61. A similar analysis is given for Good's algorithm and for two algorithms that compute the discrete Fourier transform in O(n log n) operations: the chirp-z transform and the mixed-radix algorithm that computes the transform of a series of prime length p in O(p log p) operations.
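A(n) is cheap to compute by trial division (assuming, as the 2nA(n) count for composite radix decompositions suggests, that prime divisors are summed with multiplicity):

```python
def A(n):
    """Sum of the prime divisors of n, counted with multiplicity (an
    assumption on the paper's convention), so the original Cooley-Tukey
    algorithm uses roughly 2 n A(n) operations."""
    total, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            total += p
            n //= p
        p += 1
    if n > 1:
        total += n                 # remaining factor is prime
    return total

# Power of two: A(2^k) = 2k, so 2 n A(n) = 4 n log2(n) -- the familiar
# n log n behaviour.  A prime length p gives A(p) = p, hence ~2 p^2
# operations, which is why prime lengths need the chirp-z trick.
ops_1024 = 2 * 1024 * A(1024)      # 1024 = 2^10, so A(1024) = 20
ops_1021 = 2 * 1021 * A(1021)      # 1021 is prime, so A(1021) = 1021
```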

14.
Rapid progress in information and computer technology allows the development of more advanced optimal control algorithms dealing with real-world problems. In this paper, which is Part 1 of a two-part sequence, a multiple-subarc gradient-restoration algorithm (MSGRA) is developed. We note that the original version of the sequential gradient-restoration algorithm (SGRA) was developed by Miele et al. in single-subarc form (SSGRA) during the years 1968–86; it has been applied successfully to solve a large number of optimal control problems of atmospheric and space flight.

MSGRA is an extension of SSGRA, the single-subarc gradient-restoration algorithm. The primary reason for MSGRA is to enhance the robustness of gradient-restoration algorithms and also to enlarge the field of applications. Indeed, MSGRA can be applied to optimal control problems involving multiple subsystems as well as discontinuities in the state and control variables at the interface between contiguous subsystems.

Two features of MSGRA are increased automation and efficiency. The automation of MSGRA is enhanced via time normalization: the actual time domain is mapped into a normalized time domain such that the normalized time length of each subarc is 1. The efficiency of MSGRA is enhanced by using the method of particular solutions to solve the multipoint boundary-value problems associated with the gradient phase and the restoration phase of the algorithm.

In a companion paper [Part 2 (Ref. 2)], MSGRA is applied to compute the optimal trajectory for a multistage launch vehicle design, specifically, a rocket-powered spacecraft ascending from the Earth's surface to a low Earth orbit (LEO). Single-stage, double-stage, and triple-stage configurations are considered and compared.
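Time normalization itself is a one-line change of variables: with t = t0 + τ(t1 − t0), the dynamics dx/dt = f(x, t) become dx/dτ = (t1 − t0) f(x, t) on τ ∈ [0, 1]. A sketch with a toy Euler integrator (the solver is illustrative; MSGRA's integration scheme is not reproduced here):

```python
def integrate_normalized(f, x0, t0, t1, steps=1000):
    """Euler integration on the normalized time domain tau in [0, 1].
    With t = t0 + tau * (t1 - t0), dynamics dx/dt = f(x, t) become
    dx/dtau = (t1 - t0) * f(x, t), so every subarc has normalized
    length 1 regardless of its actual duration."""
    dtau = 1.0 / steps
    x = x0
    for k in range(steps):
        t = t0 + (k * dtau) * (t1 - t0)
        x = x + dtau * (t1 - t0) * f(x, t)
    return x

# dx/dt = -x on [0, 2] with x(0) = 1: the exact value is x(2) = exp(-2).
x_end = integrate_normalized(lambda x, t: -x, 1.0, 0.0, 2.0)
```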

15.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the case of a quadratic functional subject to linear constraints is considered, and a conjugate-gradient algorithm is derived. Nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the differential equations, the boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Next, the more general case of a nonquadratic functional subject to nonlinear constraints is considered. The algorithm derived for the linear-quadratic case is employed with one modification: a restoration phase is inserted between any two successive conjugate-gradient phases. In the restoration phase, variations Δx(t), Δu(t), Δπ are determined by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Thus, a sequential conjugate-gradient-restoration algorithm is constructed in such a way that the differential equations and the boundary conditions are satisfied at the end of each complete conjugate-gradient-restoration cycle. Several numerical examples illustrating the theory of this paper are given in Part 2 (see Ref. 1). These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.

This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor A. Miele for stimulating discussions. Formerly, Graduate Student in Aero-Astronautics, Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas.

16.
Raney's algorithm for computing the continued fraction expansion of y = (ax + b)/(cx + d) from that of x is formally described. Some variants of the algorithm are also presented, and connections with classical number-theoretic results are briefly discussed.
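The coefficient-matrix update at the heart of such homographic algorithms follows from substituting x = t + 1/x′ into y = (ax + b)/(cx + d). A sketch for finite (rational) continued fractions, with the streaming emission of output terms that the full algorithm performs omitted for brevity:

```python
from fractions import Fraction

def homographic_cf(cf, a, b, c, d):
    """Continued fraction of y = (a x + b)/(c x + d) for a rational x given
    by its finite continued fraction cf.  Ingesting a term t of x maps the
    coefficient matrix (a b; c d) to (a t + b, a; c t + d, c); once all of
    x is consumed, y = a/c exactly, and we expand that rational.
    (Illustrative sketch, not Raney's full state-machine formulation.)"""
    for t in cf:
        a, b = a * t + b, a
        c, d = c * t + d, c
    y, out = Fraction(a, c), []
    while True:                         # standard CF expansion of a rational
        q = y.numerator // y.denominator
        out.append(q)
        frac = y - q
        if frac == 0:
            return out
        y = 1 / frac

# x = [2; 1, 3] = 11/4; take y = (x + 1)/(2x - 1) = 5/6 = [0; 1, 5].
cf_y = homographic_cf([2, 1, 3], 1, 1, 2, -1)
```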

17.
A tournament T on any set X is a dyadic relation such that for any x, y ∈ X, (a) (x, x) ∉ T and (b) if x ≠ y then (x, y) ∈ T iff (y, x) ∉ T. The score vector of T is the cardinal-valued function defined by R(x) = |{y ∈ X : (x, y) ∈ T}|. We present theorems for infinite tournaments analogous to Landau's necessary and sufficient conditions that a vector be the score vector for some finite tournament. Included also is a new proof of Landau's theorem based on a simple application of the “marriage” theorem.
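Landau's finite-case condition is easy to check directly: for every k, the k smallest scores must sum to at least C(k, 2), with equality at k = n. A sketch (function names are illustrative):

```python
def landau_holds(scores):
    """Landau's condition for a finite tournament score vector: for each k,
    the k smallest scores sum to at least k*(k-1)/2, with equality at k = n.
    Necessary and sufficient for realizability in the finite case."""
    s = sorted(scores)
    n = len(s)
    for k in range(1, n + 1):
        need = k * (k - 1) // 2
        have = sum(s[:k])
        if have < need or (k == n and have != need):
            return False
    return True

def score_vector(n, edges):
    """Score vector R of a tournament given as a set of directed pairs."""
    wins = [0] * n
    for x, y in edges:
        wins[x] += 1
    return wins

# A 3-cycle tournament on {0, 1, 2}: every player beats exactly one other.
cyc = {(0, 1), (1, 2), (2, 0)}
```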

18.
A technique is shown for speeding up parallel evaluation of functions. An algorithm is presented that evaluates powers x^n in time O(√log n) using O(n) processors and certain preprocessed data, while the known algorithms take ⌈log₂ n⌉ steps of multiplication or addition.
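For comparison, the ⌈log₂ n⌉-step sequential baseline is ordinary square-and-multiply; the O(√log n)-time parallel method, which additionally needs O(n) processors and preprocessed tables, is not reproduced here:

```python
def power_sequential(x, n):
    """Square-and-multiply: computes x^n with O(log n) multiplications on
    one processor -- the ceil(log2 n)-step baseline that the parallel
    O(sqrt(log n))-time algorithm above improves on."""
    result, base, mults = 1, x, 0
    while n > 0:
        if n & 1:
            result *= base             # multiply in this bit's power
            mults += 1
        n >>= 1
        if n:
            base *= base               # square for the next bit
            mults += 1
    return result, mults

val, mults = power_sequential(3, 13)   # 3^13 via 6 multiplications
```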

19.
A technique for implementing Dijkstra's shortest paths algorithm is proposed. This method runs in O(m log log D) time in the worst case, where m is the number of edges and D is the length of the longest edge in the graph.
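As a baseline, here is textbook Dijkstra with a binary heap (O(m log n)); the O(m log log D) priority structure from the abstract is not implemented here, only the algorithm it accelerates:

```python
import heapq

def dijkstra(adj, s):
    """Textbook Dijkstra with a binary heap and lazy deletion of stale
    queue entries.  adj maps a vertex to a list of (neighbor, weight)
    pairs; returns shortest distances from s to every reachable vertex."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                    # stale entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {0: [(1, 4), (2, 1)], 2: [(1, 2), (3, 5)], 1: [(3, 1)], 3: []}
d = dijkstra(adj, 0)
```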

20.
The construction of minimum spanning trees (MSTs) of weighted graphs is a problem that arises in many applications. In this paper we will study a new parallel algorithm that constructs an MST of an N-node graph in time proportional to N lg N, on an (N/lg N)-processor computing system. The primary theoretical contribution of this paper is the new algorithm, which is an improvement over Sollin's parallel MST algorithm in several ways. On a more practical level, this algorithm is appropriate for implementation in VLSI technology.
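Sollin's (Borůvka's) round structure, which the parallel algorithms build on, can be simulated sequentially: each round, every component picks its cheapest outgoing edge and the picks are merged. A sketch assuming a connected graph with distinct edge weights (so the MST is unique):

```python
class DSU:
    """Union-find for tracking connected components."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def boruvka_mst(n, edges):
    """Sequential simulation of Sollin's/Boruvka's MST rounds.  The parallel
    algorithms above execute each round's picks concurrently; here they are
    executed serially.  Assumes a connected graph with distinct weights."""
    dsu, mst_weight, mst_edges = DSU(n), 0, 0
    while mst_edges < n - 1:
        best = {}                       # component root -> cheapest edge out
        for u, v, w in edges:
            ru, rv = dsu.find(u), dsu.find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in best or w < best[r][2]:
                    best[r] = (u, v, w)
        for u, v, w in best.values():
            if dsu.union(u, v):         # skip picks made redundant this round
                mst_weight += w
                mst_edges += 1
    return mst_weight

edges = [(0, 1, 4), (0, 2, 3), (1, 2, 1), (1, 3, 2), (2, 3, 5)]
total = boruvka_mst(4, edges)
```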
