Similar Documents
20 similar documents were found for this query.
1.
A new accelerated gradient method for finding the minimum of a function f(x) whose variables are unconstrained is investigated. The new algorithm can be stated as follows: $$\tilde x = x + \delta x,\quad \delta x = - \alpha g(x) + \sum\limits_{i = 1}^k {\beta _i \delta x_i },$$ where x is an n-vector, g(x) is the gradient of the function f(x), δx is the change in the position vector for the iteration under consideration, and δx_i is the change in the position vector for the ith previous iteration. The quantities α and β_i are scalars chosen at each step so as to yield the greatest decrease in the function; the scalar k denotes the number of past iterations remembered. For k = 1, the algorithm reduces to the memory gradient method of Ref. 2; it contains at most two undetermined multipliers to be optimized by a two-dimensional search. For k = n − 1, the algorithm contains at most n undetermined multipliers to be optimized by an n-dimensional search. Two nonquadratic test problems are considered. For both problems, the memory gradient method and the supermemory gradient method are compared with the Fletcher-Reeves method and the Fletcher-Powell-Davidon method. A comparison with quasilinearization is also presented.
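A minimal numerical sketch of the update rule above may help make it concrete: the multipliers (α, β_1, ..., β_k) are chosen at each step by a direct (k+1)-dimensional search, here delegated to SciPy's Nelder-Mead routine on the Rosenbrock test function. The test function, the search routine, and the iteration limit are illustrative assumptions, not the exact search procedure of the paper.

```python
# Sketch of the supermemory gradient update:
#   x_new = x - alpha * g(x) + sum_i beta_i * dx_i,
# with (alpha, beta_1, ..., beta_k) chosen by a direct search at every step.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der


def supermemory_gradient(f, grad, x0, k=2, iters=200):
    x = np.asarray(x0, dtype=float)
    memory = []                      # the k most recent position changes dx_i
    for _ in range(iters):
        g = grad(x)

        def trial(c):                # c = (alpha, beta_1, ..., beta_k')
            step = -c[0] * g
            for beta, dx in zip(c[1:], memory):
                step = step + beta * dx
            return f(x + step)

        c0 = np.zeros(1 + len(memory))
        c0[0] = 1e-3                 # small initial steplength guess
        c = minimize(trial, c0, method="Nelder-Mead").x

        step = -c[0] * g
        for beta, dx in zip(c[1:], memory):
            step = step + beta * dx
        memory = ([step] + memory)[:k]   # remember the k latest position changes
        x = x + step
    return x


print(supermemory_gradient(rosen, rosen_der, [-1.2, 1.0], k=2))  # should approach [1, 1]
```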

2.
In this paper, the problem of minimizing a nonlinear function f(x) subject to a nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. A conjugate gradient-restoration algorithm similar to those developed by Miele et al. (Refs. 1 and 2) is employed. This particular algorithm consists of a sequence of conjugate gradient-restoration cycles. The conjugate gradient portion of each cycle is based upon a conjugate gradient algorithm that is derived for the special case of a quadratic function subject to linear constraints. This portion of the cycle involves a single step and is designed to decrease the value of the function while satisfying the constraints to first order. The restoration portion of each cycle involves one or more iterations and is designed to restore the norm of the constraint function to within a predetermined tolerance about zero. The conjugate gradient-restoration sequence is reinitialized with a simple gradient step every n − q or fewer cycles. At the beginning of each simple gradient step, a positive-definite preconditioning matrix is used to accelerate the convergence of the algorithm. The preconditioner chosen, H+, is the positive-definite reflection of the Hessian matrix H. The matrix H+ is defined herein to be a matrix whose eigenvectors are identical to those of the Hessian and whose eigenvalues are the moduli of the latter's eigenvalues. A singular-value decomposition is used to efficiently construct this matrix. The selection of the matrix H+ as the preconditioner is motivated by the fact that gradient algorithms exhibit excellent convergence characteristics on quadratic problems whose Hessians have small condition numbers. To this end, the transforming operator H+^(−1/2) produces a transformed Hessian with a condition number of one. A higher-order example, which has resulted from a new eigenstructure assignment formulation (Ref. 3), is used to illustrate the rapidity of convergence of the algorithm, along with two simpler examples.
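A small sketch of the preconditioner described above, the positive-definite reflection H+ with the same eigenvectors as the Hessian H and eigenvalues replaced by their moduli, is given below. A symmetric eigendecomposition is used here in place of the SVD route mentioned in the abstract, and the sample indefinite Hessian is an illustrative assumption.

```python
# Positive-definite reflection H+ of a symmetric Hessian H.
import numpy as np

def positive_definite_reflection(H):
    w, V = np.linalg.eigh(H)               # H = V diag(w) V^T, H symmetric
    return V @ np.diag(np.abs(w)) @ V.T    # H+ = V diag(|w|) V^T

H = np.array([[ 2.0, 1.0, 0.0],
              [ 1.0,-3.0, 0.5],
              [ 0.0, 0.5, 1.0]])           # indefinite sample Hessian
H_plus = positive_definite_reflection(H)

# Preconditioning with H+^(-1/2) sends H to a matrix whose eigenvalues all have
# modulus one, i.e. condition number one in the sense used above.
w, V = np.linalg.eigh(H_plus)
H_plus_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
print(np.linalg.eigvalsh(H_plus_inv_sqrt @ H @ H_plus_inv_sqrt))  # values of +1 or -1
```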

3.
The problem of minimizing a function f(x) subject to the nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. The sequential gradient-restoration algorithm (SGRA: Miele, [1, 2]) and the gradient-projection algorithm (GPA: Rosen, [3, 4]) are considered. These algorithms have one common characteristic: they are all composed of the alternate succession of gradient phases and restoration phases. However, they are different in several aspects, namely, (a) problem formulation, (b) structure of the gradient phase, and (c) structure of the restoration phase. First, a critical summary of SGRA and GPA is presented. Then, a comparison is undertaken by considering the speed of convergence and, above all, robustness (that is, the capacity of an algorithm to converge to a solution). The comparison is done through 16 numerical examples. In order to understand the separate effects of characteristics (a), (b), (c), six new experimental algorithms are generated by combining parts of Miele's algorithm with parts of Rosen's algorithm. Thus, the total number of algorithms investigated is eight. The numerical results show that Miele's method is on the average faster than Rosen's method. More importantly, regarding robustness, Miele's method compares favorably with Rosen's method. Through the examples, it is shown that Miele's advantage in robustness is more prominent as the curvature of the constraint increases. While this advantage is due to the combined effect of characteristics (a), (b), (c), it is characteristic (c) that plays the dominant role. Indeed, Miele's restoration provides a better search direction as well as better step-size control than Rosen's restoration.

4.
Recent advances in gradient algorithms for optimal control problems
This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered. In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it enlarges dramatically the number and variety of problems of optimal control which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control or the state or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary.
The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.

5.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. Four types of gradient-restoration algorithms are considered, and their relative efficiency (in terms of number of iterations for convergence) is evaluated. The algorithms being considered are as follows: sequential gradient-restoration algorithm, complete restoration (SGRA-CR); sequential gradient-restoration algorithm, incomplete restoration (SGRA-IR); combined gradient-restoration algorithm, no restoration (CGRA-NR); and combined gradient-restoration algorithm, incomplete restoration (CGRA-IR). Evaluation of these algorithms is accomplished through six numerical examples. The results indicate that (i) the inclusion of a restoration phase is necessary for rapid convergence and (ii) while SGRA-CR is the most desirable algorithm if feasibility of the suboptimal solutions is required, rapidity of convergence to the optimal solution can be increased if one employs algorithms with incomplete restoration, in particular, CGRA-IR. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185.

6.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered. Here, f is a scalar, x an n-vector, and φ a q-vector, with q < n. The use of the augmented penalty function is explored in connection with the conjugate gradient-restoration algorithm. The augmented penalty function W(x, λ, k) is defined to be the linear combination of the augmented function F(x, λ) and the constraint error P(x), where the q-vector λ is the Lagrange multiplier and the scalar k is the penalty constant. The conjugate gradient-restoration algorithm includes a conjugate-gradient phase involving n − q iterations and a restoration phase involving one iteration. In the conjugate-gradient phase, one tries to improve the value of the function, while avoiding excessive constraint violation. In the restoration phase, one reduces the constraint error, while avoiding excessive change in the value of the function. Concerning the conjugate-gradient phase, two classes of algorithms are considered: for algorithms of Class I, the Lagrange multiplier λ is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the Lagrange multiplier λ is determined so that the constraint is satisfied to first order. For each class, two versions are studied. In version (α), the penalty constant is held unchanged throughout the entire algorithm. In version (β), the penalty constant is updated at the beginning of each conjugate-gradient phase so as to achieve certain desirable properties. Concerning the restoration phase, the minimum distance algorithm is employed. Since the use of the augmented penalty function automatically prevents excessive constraint violation, single-step restoration is considered. If the function f(x) is quadratic and the constraint φ(x) is linear, all the previous algorithms are identical, that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n − q if the starting point x_s is such that φ(x_s) = 0 and at most N* = 1 + (n − q) if the starting point x_s is such that φ(x_s) ≠ 0. In order to illustrate the theory, seven numerical examples are developed. The first example refers to a quadratic function and a linear constraint. The remaining examples refer to nonquadratic functions and nonlinear constraints. For the linear-quadratic example, all the algorithms behave identically, as predicted by the theory. For the nonlinear-nonquadratic examples, algorithms of Class II generally exhibit faster convergence than algorithms of Class I, and algorithms of type (β) generally exhibit faster convergence than algorithms of type (α). This research was supported by the National Science Foundation, Grant No. GP-27271.
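A minimal sketch of a gradient step on an augmented penalty function of the kind described above is given below for a toy equality-constrained problem. The specific forms F(x, λ) = f(x) + λφ(x) and P(x) = φ(x)², the fixed multiplier and penalty constant, and the plain gradient loop are illustrative assumptions; the paper's conjugate-gradient and restoration phases and its multiplier and penalty-constant updates are not reproduced here.

```python
# Toy problem: minimize f(x) = x1^2 + 2*x2^2 subject to phi(x) = x1 + x2 - 1 = 0,
# whose constrained minimizer is (2/3, 1/3).
import numpy as np

def f(x):        return x[0]**2 + 2.0 * x[1]**2
def phi(x):      return x[0] + x[1] - 1.0
def grad_f(x):   return np.array([2.0 * x[0], 4.0 * x[1]])
def grad_phi(x): return np.array([1.0, 1.0])

def grad_W(x, lam, k):
    # gradient of W(x, lam, k) = f(x) + lam * phi(x) + k * phi(x)**2
    return grad_f(x) + (lam + 2.0 * k * phi(x)) * grad_phi(x)

x = np.array([0.0, 0.0])
lam, k, step = -4.0 / 3.0, 10.0, 0.02   # lam is the exact multiplier of this toy problem
for _ in range(500):
    x = x - step * grad_W(x, lam, k)
print(x, phi(x))   # approaches the constrained minimizer (2/3, 1/3), with phi(x) near 0
```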

7.
This paper generalizes the penalty function method of Zangwill for scalar problems to vector problems. The vector penalty function takes the form $$g(x,\lambda ) = f(x) + \lambda ^{ - 1} P(x)e,$$ where e ∈ R^m, with each component equal to unity; f: R^n → R^m represents the m objective functions {f_i} defined on X ⊆ R^n; λ ∈ R^1, λ > 0; and P: R^n → R^1, with X ⊆ Z ⊆ R^n, P(x) ≥ 0 for all x ∈ R^n, and P(x) = 0 if and only if x ∈ X. The paper studies properties of {E(Z, λ_r)} for a sequence of positive {λ_r} converging to 0 in relationship to E(X), where E(Z, λ_r) is the efficient set of Z with respect to g(·, λ_r) and E(X) is the efficient set of X with respect to f. It is seen that some of Zangwill's results do not hold for the vector problem. In addition, some new results are given.

8.
We consider two convex polyhedra related to the vertex packing problem for a finite, undirected, loopless graph G with no multiple edges. A characterization is given for the extreme points of the polyhedron \(\mathcal{L}_G = \{ x \in R^n :Ax \leqslant 1_m ,x \geqslant 0\} \), where A is the m × n edge-vertex incidence matrix of G and 1_m is an m-vector of ones. A general class of facets of the integer polyhedron conv{x ∈ R^n : Ax ≤ 1_m, x binary} is described which subsumes a class examined by Padberg [13]. Some of the results for this polyhedron are extended to a more general class of integer polyhedra obtained from independence systems.
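A small illustration of the fractional polyhedron \(\mathcal{L}_G\) is sketched below for the triangle K3, an illustrative choice rather than an example from the paper: the three edge constraints are tight and linearly independent at x = (1/2, 1/2, 1/2), so this half-integral point is an extreme point of \(\mathcal{L}_G\) that does not belong to the integer polyhedron.

```python
# Fractional extreme point of the vertex-packing polyhedron of the triangle K3.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2)]          # edge list of the triangle
A = np.zeros((len(edges), 3))             # m x n edge-vertex incidence matrix
for row, (u, v) in enumerate(edges):
    A[row, u] = A[row, v] = 1.0

x = np.linalg.solve(A, np.ones(3))        # unique point where all three edge constraints are tight
print(x)                                  # [0.5 0.5 0.5]
print(bool(np.all(A @ x <= 1.0) and np.all(x >= 0.0)))   # feasible for L_G -> True
```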

9.
An m-simplex x in an n-category A consists of the assignment of an r-cell x(u) to each (r + 1)-element subset u of {0, 1, ..., m} such that the source and target (r−1)-cells of x(u) are appropriate composites of x(v) for v a proper subset of u. As m increases, the appropriate composites quickly become hard to write down. This paper constructs an m-category O_m such that an m-functor x: O_m → A is precisely an m-simplex in A. This leads to a simplicial set ΔA, called the nerve of A, and provides the basis for cohomology with coefficients in A. Higher-order equivalences in A as well as free n-categories are carefully defined. Each O_m is free.

10.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered. Here, f is a scalar, x an n-vector, and φ a q-vector, with q < n. The use of the augmented penalty function is explored in connection with the ordinary gradient algorithm. The augmented penalty function W(x, λ, k) is defined to be the linear combination of the augmented function F(x, λ) and the constraint error P(x), where the q-vector λ is the Lagrange multiplier and the scalar k is the penalty constant. The ordinary gradient algorithm is constructed in such a way that the following properties are satisfied in toto or in part: (a) descent property on the augmented penalty function, (b) descent property on the augmented function, (c) descent property on the constraint error, (d) constraint satisfaction on the average, or (e) individual constraint satisfaction. Properties (d) and (e) are employed to first order only. With the above considerations in mind, two classes of algorithms are developed. For algorithms of Class I, the multiplier λ is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier λ is determined so that the constraint is satisfied to first order. Algorithms of Class I have properties (a), (b), (c) and include Algorithms (I-α) and (I-β). In the former algorithm, the penalty constant is held unchanged for all iterations; in the latter, the penalty constant is updated at each iteration so as to ensure satisfaction of property (d). Algorithms of Class II have properties (a), (c), (e) and include Algorithms (II-α) and (II-β). In the former algorithm, the penalty constant is held unchanged for all iterations; in the latter, the penalty constant is updated at each iteration so as to ensure satisfaction of property (b). Four numerical examples are presented. They show that algorithms of Class II exhibit faster convergence than algorithms of Class I and that algorithms of type (β) exhibit faster convergence than algorithms of type (α). Therefore, Algorithm (II-β) is the best among those analyzed. This is due to the fact that, in Algorithm (II-β), individual constraint satisfaction is enforced and that descent properties hold for the augmented penalty function, the augmented function, and the constraint error. This research was supported by the National Science Foundation, Grant No. GP-27271. The authors are indebted to Dr. J. N. Damoulakis for analytical and numerical assistance. Discussions with Professor H. Y. Huang are acknowledged.

11.
Convergence results for interpolatory product rules for evaluating Cauchy principal value integrals of the form $\int_{-1}^{1} v(x)f(x)/(x - \lambda)\,dx$, where v is an admissible weight function, have been extended to integrals of the form $\int_{-1}^{1} k(x)f(x)/(x - \lambda)\,dx$, where k is an arbitrary integrable function subject to certain conditions. Further, whereas the above convergence results were shown when the interpolation points were the Gauss points with respect to some admissible weight function w, they are now shown to hold when the interpolation points are Radau or Lobatto points with respect to w.
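For orientation, a minimal sketch of evaluating such a Cauchy principal value integral in the simplest case k(x) ≡ 1 is given below, using singularity subtraction and a Gauss-Legendre rule. This is a generic device, not the interpolatory product rules analyzed in the paper, and the choices f(x) = x and λ = 0.3 are illustrative assumptions.

```python
# PV int_{-1}^{1} f(x)/(x - lam) dx via singularity subtraction.
import numpy as np

def pv_integral(f, lam, nodes=64):
    # PV int_{-1}^{1} f(x)/(x - lam) dx
    #   = int_{-1}^{1} [f(x) - f(lam)]/(x - lam) dx + f(lam) * ln((1 - lam)/(1 + lam)),
    # and the first integrand has a removable singularity at x = lam.
    x, w = np.polynomial.legendre.leggauss(nodes)
    smooth = (f(x) - f(lam)) / (x - lam)
    return np.dot(w, smooth) + f(lam) * np.log((1.0 - lam) / (1.0 + lam))

lam = 0.3
print(pv_integral(lambda x: x, lam))
# Closed form for f(x) = x: since x/(x - lam) = 1 + lam/(x - lam), the PV integral
# equals 2 + lam * ln((1 - lam)/(1 + lam)).
print(2.0 + lam * np.log((1.0 - lam) / (1.0 + lam)))
```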

12.
Associated with an m × n matrix with entries 0 or 1 are the m-vector of row sums and n-vector of column sums. In this article we study the set of all pairs of these row and column sums for fixed m and n. In particular, we give an algorithm for finding all such pairs for a given m and n.
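A brute-force sketch of the object studied above, the set of all (row-sum, column-sum) pairs realized by 0-1 matrices of fixed size, is given below; the 2 × 3 size is an illustrative choice, and the enumeration does not reproduce the article's algorithm.

```python
# Enumerate every (row-sum vector, column-sum vector) pair of 0-1 matrices of size m x n.
from itertools import product

def row_column_sum_pairs(m, n):
    pairs = set()
    for bits in product((0, 1), repeat=m * n):
        M = [bits[i * n:(i + 1) * n] for i in range(m)]
        rows = tuple(sum(r) for r in M)
        cols = tuple(sum(M[i][j] for i in range(m)) for j in range(n))
        pairs.add((rows, cols))
    return pairs

pairs = row_column_sum_pairs(2, 3)
print(len(pairs))          # number of distinct (row-sum, column-sum) pairs for 2 x 3
print(sorted(pairs)[:5])   # a few sample pairs
```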

13.
In this paper, we discuss the following inequality constrained optimization problem (P): min f(x) subject to g(x) ≤ 0, g(x) = (g_1(x), ..., g_r(x))^T, where f(x), g_j(x) (j = 1, ..., r) are locally Lipschitz functions. The L_1 exact penalty function of the problem (P) is (PC): min f(x) + c p(x) subject to x ∈ R^n, where p(x) = max{0, g_1(x), ..., g_r(x)}, c > 0. We will discuss the relationships between (P) and (PC). In particular, we will prove that under some (mild) conditions a local minimum of (PC) is also a local minimum of (P).
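A one-dimensional sketch of the exactness phenomenon described above is given below, using p(x) = max{0, g_1(x), ..., g_r(x)} on a toy problem with a single constraint; the toy problem, the grid search, and the chosen values of c are illustrative assumptions.

```python
# Toy problem: minimize f(x) = x subject to g(x) = 1 - x <= 0, whose constrained minimizer is x = 1.
import numpy as np

def f(x): return x
def g(x): return 1.0 - x
def p(x): return np.maximum(0.0, g(x))       # single constraint, so p = max{0, g}

xs = np.linspace(-5.0, 5.0, 100001)
for c in (0.5, 2.0, 10.0):
    penalized = f(xs) + c * p(xs)
    print(c, xs[np.argmin(penalized)])
# For c larger than 1 (the multiplier of the toy problem) the unconstrained minimizer of
# f + c*p sits at x = 1, matching the constrained minimum; for c = 0.5 the penalized
# function keeps decreasing to the left and the grid minimizer lands on the boundary x = -5.
```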

14.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered, where f is a scalar, x an n-vector, and φ a q-vector, with q < n. Several conjugate gradient-restoration algorithms are analyzed: these algorithms are composed of the alternate succession of conjugate gradient phases and restoration phases. In the conjugate gradient phase, one tries to improve the value of the function while avoiding excessive constraint violation. In the restoration phase, one tries to reduce the constraint error, while avoiding excessive change in the value of the function. Concerning the conjugate gradient phase, two classes of algorithms are considered: for algorithms of Class I, the multiplier is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier is determined so that the constraint is satisfied to first order. Concerning the restoration phase, two topics are investigated: (a) restoration type, that is, complete restoration vs incomplete restoration, and (b) restoration frequency, that is, frequent restoration vs infrequent restoration. Depending on the combination of type and frequency of restoration, four algorithms are generated within Class I and within Class II, respectively: Algorithm (α) is characterized by complete and frequent restoration; Algorithm (β) is characterized by incomplete and frequent restoration; Algorithm (γ) is characterized by complete and infrequent restoration; and Algorithm (δ) is characterized by incomplete and infrequent restoration. If the function f(x) is quadratic and the constraint φ(x) is linear, all of the previous algorithms are identical, that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n − q if the starting point x_s is such that φ(x_s) = 0, and at most N* = 1 + (n − q) if the starting point x_s is such that φ(x_s) ≠ 0. In order to illustrate the theory, five numerical examples are developed. The first example refers to a quadratic function and a linear constraint. The remaining examples refer to a nonquadratic function and a nonlinear constraint. For the linear-quadratic example, all the algorithms behave identically, as predicted by the theory. For the nonlinear-nonquadratic examples, Algorithm (II-δ), which is characterized by incomplete and infrequent restoration, exhibits superior convergence characteristics. It is of interest to compare Algorithm (II-δ) with Algorithm (I-α), which is the sequential conjugate gradient-restoration algorithm of Ref. 1 and is characterized by complete and frequent restoration. For the nonlinear-nonquadratic examples, Algorithm (II-δ) converges to the solution in a number of iterations which is about one-half to two-thirds that of Algorithm (I-α). This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67.

15.
A function f : N → R is called additive if f(mn) = f(m) + f(n) for all m, n with (m, n) = 1. Let μ(x) = max_{n≤x} (f(n) − f(n + 1)) and ν(x) = max_{n≤x} (f(n + 1) − f(n)). In 1979, Ruzsa proved that there exists a constant c such that for any additive function f, μ(x) ≤ c ν(x²) + c_f, where c_f is a constant depending only on f. Denote by R_af the least such constant c. We call R_af Ruzsa's constant on additive functions. In this paper, we prove that R_af ≤ 20.
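A numerical illustration of the quantities μ(x) and ν(x) defined above is sketched below, using the additive function ω(n), the number of distinct prime factors; the choice of f and of x are illustrative assumptions, and the computation only exercises the definitions, not the proof that R_af ≤ 20.

```python
# mu(x) and nu(x) for the additive function f = omega (number of distinct prime factors).
def omega(n):
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)   # omega(mn) = omega(m) + omega(n) when gcd(m, n) = 1

def mu(x):
    return max(omega(n) - omega(n + 1) for n in range(1, x + 1))

def nu(x):
    return max(omega(n + 1) - omega(n) for n in range(1, x + 1))

x = 200
print(mu(x), nu(x), nu(x * x))   # Ruzsa's inequality bounds mu(x) by c * nu(x^2) + c_f
```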

16.
We show that a function f having a little uniform smoothness can be locally represented at any point x0 as [equation], where g is an indefinitely oscillating function; the value of β and the regularity of r are related to the 2-microlocal regularity of f.

17.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the case of a quadratic functional subject to linear constraints is considered, and a conjugate-gradient algorithm is derived. Nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the differential equations, the boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Next, the more general case of a nonquadratic functional subject to nonlinear constraints is considered. The algorithm derived for the linear-quadratic case is employed with one modification: a restoration phase is inserted between any two successive conjugate-gradient phases. In the restoration phase, variations Δx(t), Δu(t), Δπ are determined by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Thus, a sequential conjugate-gradient-restoration algorithm is constructed in such a way that the differential equations and the boundary conditions are satisfied at the end of each complete conjugate-gradient-restoration cycle. Several numerical examples illustrating the theory of this paper are given in Part 2 (see Ref. 1). These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor A. Miele for stimulating discussions. Formerly, Graduate Student in Aero-Astronautics, Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas.

18.
《Journal of Complexity》1999,15(1):128-147
Most research on the edit distance problem and the k-differences problem considered the set of edit operations consisting of changes, insertions, and deletions. In this paper we include the swap operation that interchanges two adjacent characters into the set of allowable edit operations, and we present an O(t·min(m, n))-time algorithm for the extended edit distance problem, where t is the edit distance between the given strings, and an O(kn)-time algorithm for the extended k-differences problem. That is, we add swaps into the set of edit operations without increasing the time complexities of previous algorithms that consider only changes, insertions, and deletions for the edit distance and k-differences problems.
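A short sketch of the extended operation set is given below: the standard O(mn) dynamic program for changes, insertions, deletions, and adjacent swaps (the restricted Damerau-Levenshtein recurrence). It is an illustrative baseline, not the O(t·min(m, n)) algorithm of the paper.

```python
# Edit distance with changes, insertions, deletions, and adjacent swaps.
def edit_distance_with_swaps(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # change or match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # adjacent swap
    return d[m][n]

print(edit_distance_with_swaps("abcdef", "abdcef"))  # 1: a single swap of c and d
```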

19.
In geometric terms, the Ekeland variational principle says that a lower-bounded proper lower-semicontinuous function f defined on a Banach space X has a point (x_0, f(x_0)) in its graph that is maximal in the epigraph of f with respect to the cone order determined by the convex cone K_λ = {(x, α) ∈ X × R : λ‖x‖ ≤ −α}, where λ is a fixed positive scalar. In this case, we write (x_0, f(x_0)) ∈ λ-ext f. Here, we investigate the following question: if (x_0, f(x_0)) ∈ λ-ext f, where f is a convex function, and if ⟨f_n⟩ is a sequence of convex functions convergent to f in some sense, can (x_0, f(x_0)) be recovered as a limit of a sequence of points taken from λ-ext f_n? The convergence notions that we consider are the bounded Hausdorff convergence, Mosco convergence, and slice convergence, a new convergence notion that agrees with the Mosco convergence in the reflexive setting, but which, unlike the Mosco convergence, behaves well without reflexivity.

20.
Let f(z) be a meromorphic function of finite nonzero order ρ. Assuming certain growth estimates on f by comparing it with r^ρ L(r), where L(r) is a slowly changing function, we have obtained bounds for the zeros of f(z) − g(z), where g(z) is a meromorphic function satisfying T(r, g) = o{T(r, f)} as r → ∞. These bounds are satisfied except for some exceptional functions. Examples are given to show that such exceptional functions exist.

