Similar Literature (20 records retrieved)
1.
In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically by using five nonquadratic functions. In order to investigate the effects of the stepsize on the performances of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.

2.
Conjugate Directions without Linear Searches
A modified form of the Quasi-Newton family of variable metric algorithms used in function minimization is proposed that has quadratic termination without requiring linear searches. Most members of the Quasi-Newton family rely for quadratic termination on the fact that, with accurate linear searches, the directions generated form a conjugate set when the function is quadratic. With some members of the family, the convergence of the sequence of approximate inverse Hessian matrices to the true inverse Hessian is also stable. With the proposed modification, the same sequence of matrices and the same set of conjugate directions are generated without accurate linear searches. On a quadratic function the proposal is also related to a suggestion by Hestenes which generates the same set of conjugate directions without accurate linear searches. Both methods therefore find the minimum of an n-dimensional quadratic function in at most n+2 function and gradient calls. On non-quadratic functions the proposal retains the main advantages claimed for both the stable Quasi-Newton and Hestenes approaches. It shows promise in that it is competitive with the most efficient unconstrained optimization algorithms currently available.

3.
We propose an interior point method for large-scale convex quadratic programming where no assumptions are made about the sparsity structure of the quadratic coefficient matrix Q. The interior point method we describe is a doubly iterative algorithm that invokes a conjugate projected gradient procedure to obtain the search direction. The effect is that Q appears in a conjugate direction routine rather than in a matrix factorization. By doing this, the matrices to be factored have the same nonzero structure as those in linear programming. Further, one variant of this method is theoretically convergent with only one matrix factorization throughout the procedure.

4.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and therefore is capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular direction of search, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties which are similar to or which generalize the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method, with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.

5.
The problem of minimizing a function f(x) subject to the constraint φ(x) = 0 is considered. Here, f is a scalar, x is an n-vector, and φ is an m-vector, where m < n. A general quadratically convergent algorithm is presented. The conjugate-gradient algorithm and the variable-metric algorithms for constrained function minimization can be obtained as particular cases of the general algorithm. It is shown that, for a quadratic function subject to a linear constraint, all the particular algorithms behave identically if the one-dimensional search for the stepsize is exact. Specifically, they all produce the same sequence of points and lead to the constrained minimal point in no more than n − r descent steps, where r is the number of linearly independent constraints. The algorithms are then modified so that they can also be employed for a nonquadratic function subject to a nonlinear constraint. Some particular algorithms are tested through several numerical examples.
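As a small illustration of the linearly constrained case mentioned above, the sketch below (generic, not taken from the paper; A and g denote the constraint Jacobian and the current gradient, both assumed names) computes the steepest-descent direction projected onto the feasible directions of linear constraints Ax = b:

```python
import numpy as np

def projected_gradient_direction(g, A):
    """Steepest-descent direction projected onto the null space of the
    linear constraints A x = b, i.e. onto {d : A d = 0}.  This is the common
    building block that constrained conjugate-gradient and variable-metric
    algorithms specialize; assumes A has full row rank."""
    # P = I - A^T (A A^T)^{-1} A  is the orthogonal projector onto {d : A d = 0}
    AAt = A @ A.T
    return -(g - A.T @ np.linalg.solve(AAt, A @ g))
```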

6.
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures, this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, the interruption of the conjugate gradient sub-algorithm when the angle condition does not hold is theoretically justified.
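A minimal sketch of how such an interruption rule can be implemented, assuming a quadratic model with positive definite Hessian B and a hypothetical tolerance eps for the cosine of the angle (names and stopping details are illustrative, not the paper's):

```python
import numpy as np

def truncated_cg_direction(g, B, eps=1e-3, tol=1e-10):
    """Run CG on the quadratic model  m(d) = g^T d + 0.5 d^T B d  and return
    the last iterate d whose angle with -g still satisfies the angle condition
    -g^T d / (||g|| ||d||) >= eps  (angle bounded away from pi/2)."""
    n = g.size
    d = np.zeros(n)
    r = -g.copy()                     # residual of  B d = -g
    p = r.copy()
    accepted = -g.copy()              # fall back to steepest descent
    for _ in range(n):
        Bp = B @ p
        alpha = r.dot(r) / p.dot(Bp)
        d = d + alpha * p
        cos_angle = -g.dot(d) / (np.linalg.norm(g) * np.linalg.norm(d))
        if cos_angle < eps:           # angle condition violated: interrupt CG
            break
        accepted = d.copy()
        r_new = r - alpha * Bp
        if np.linalg.norm(r_new) <= tol:
            break
        p = r_new + (r_new.dot(r_new) / r.dot(r)) * p
        r = r_new
    return accepted
```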

7.
In this paper, we propose a globally convergent Polak-Ribière-Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions by employing an Armijo-type line search which is simpler and less demanding than those defined in [4,10]. An attractive property of this method is that the initial stepsize can be chosen as the one-dimensional minimizer of a quadratic model Φ(t) := f(x_k) + t g_k^T d_k + (1/2) t^2 d_k^T Q_k d_k, where Q_k is a positive definite matrix that carries some second-order information of the objective function f. This line search may therefore make the stepsize t_k more easily accepted. Preliminary numerical results show that this method is efficient.
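As a sketch of the idea, assuming a positive definite matrix Q_k is available, the minimizer of the quadratic model supplies the first trial stepsize of an Armijo-type search (a generic backtracking rule is shown, not the exact acceptance condition of the paper):

```python
import numpy as np

def model_stepsize(g, d, Q):
    """Minimizer over t of  phi(t) = f(x) + t g^T d + 0.5 t^2 d^T Q d.
    Assumes Q is positive definite and d is a descent direction (g^T d < 0)."""
    return -g.dot(d) / d.dot(Q @ d)

def armijo_search(f, x, g, d, Q, c1=1e-4, shrink=0.5, max_backtracks=30):
    """Backtracking Armijo line search started from the quadratic-model stepsize."""
    t = model_stepsize(g, d, Q)
    fx = f(x)
    for _ in range(max_backtracks):
        if f(x + t * d) <= fx + c1 * t * g.dot(d):   # Armijo sufficient decrease
            return t
        t *= shrink
    return t
```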

8.
Some feasible direction methods for the minimization of a linearly constrained convex function are studied. Special emphasis is placed on the analysis of the procedures which find the search direction, by developing active set methods which use orthogonal or Gauss-Jordan-like transformations. Numerical experiments are performed on a class of quadratic problems depending on two parameters, related to the conditioning of the matrix associated with the quadratic form and the matrix of active constraints at the optimal point. Results are given for the rate of convergence and the average iteration time. This research was partially supported by the Progetto Finalizzato Informatica, CNR, Rome, Italy.

9.
The family of feasible methods for minimization with nonlinear constraints includes the nonlinear projected gradient method, the generalized reduced gradient method (GRG), and many variants of the sequential gradient restoration algorithm (SGRA). Generally speaking, a particular iteration of any of these methods proceeds in two phases. In the restoration phase, feasibility is restored by means of the resolution of an auxiliary nonlinear problem, generally a nonlinear system of equations. In the minimization phase, optimality is improved by means of the consideration of the objective function, or its Lagrangian, on the tangent subspace to the constraints. In this paper, minimal assumptions are stated on the restoration phase and the minimization phase that ensure that the resulting algorithm is globally convergent. The key point is the possibility of comparing two successive nonfeasible iterates by means of a suitable merit function that combines feasibility and optimality. The merit function allows one to work with a high degree of infeasibility at the first iterations of the algorithm. Global convergence is proved and a particular implementation of the model algorithm is described.

10.
In this paper, we show that an analogue of the classical conjugate gradient method converges linearly when applied to solving the problem of unconstrained minimization of a strictly convex quadratic spline. Since a strictly convex quadratic program with simple bound constraints can be reformulated as unconstrained minimization of a strictly convex quadratic spline, the conjugate gradient method is used to solve the unconstrained reformulation and find the solution of the original quadratic program. In particular, if the solution of the original quadratic program is nondegenerate, then the conjugate gradient method finds the solution in a finite number of iterations. This author's research is partially supported by the NASA/Langley Research Center under grant NCC-1-68 Supplement-15.
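For reference, the classical conjugate gradient iteration that the convergence analysis builds on, applied to a strictly convex quadratic f(x) = 0.5 x^T A x − b^T x (a textbook sketch; the quadratic-spline reformulation itself is not shown):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Classical CG for minimizing 0.5 x^T A x - b^T x with A symmetric
    positive definite, i.e. for solving A x = b."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                 # negative gradient at x
    p = r.copy()
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = r.dot(r) / p.dot(Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol:
            break
        p = r_new + (r_new.dot(r_new) / r.dot(r)) * p
        r = r_new
    return x
```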

11.
A characterization of the minimizer of a function
This paper is intended to give a characterization of the minimum point of a function in terms of the gradient of the function at some other point, using some concepts from differential geometry. The function is assumed to have continuous partial derivatives up to and including order four. It is also assumed to have a positive-definite Hessian matrix on R^n and a unique minimum point.

12.
Two quadratically convergent gradient methods for minimizing an unconstrained function of several variables are examined. The heart of the Fletcher and Powell reformulation of Davidon's method is a variable H-matrix. The eigenvalues and eigenvectors of this matrix for a quadratic function are explored, leading to a proof that the gradient vectors at each step are mutually orthogonal. From this, a geometric interpretation of the H-matrix in terms of the projection of the gradient into a solution subspace is derived. These properties are then used to arrive at the main result, which states that, for a quadratic function, the direction vectors generated by the Davidon algorithm and the conjugate-gradient algorithm of Hestenes and Stiefel are scalar multiples of each other, provided the initial step each takes is in the direction of steepest descent. If one assumes no round-off error and a perfect one-dimensional search, the methods generate identical steps leading to the minimum. It is also shown that, for a quadratic function, the Davidon algorithm has a simplified version which searches in the same directions. However, the unique advantage of the original scheme, that it yields the curvature of the function at the minimum, is sacrificed for simplicity. Although these results apply to the case of a quadratic function, a comparative study of the same algorithms for a general function can be found in a companion paper. This research was carried out under Contract No. NAS 9-4036, NASA-Manned Spacecraft Center, Houston, Texas. The author is indebted to Dr. H. J. Kelley, whose suggestions and encouragement provided the inspiration for this paper.
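The orthogonality property described above is easy to observe numerically. The sketch below (a generic demonstration on a random quadratic, not code from the paper) applies the DFP update of the variable H-matrix with exact line searches and prints the Gram matrix of the successive gradients, whose off-diagonal entries come out essentially zero:

```python
import numpy as np

def dfp_update(H, s, y):
    """Davidon-Fletcher-Powell update of the inverse-Hessian approximation H,
    with s = x_{k+1} - x_k and y = g_{k+1} - g_k."""
    Hy = H @ y
    return H + np.outer(s, s) / s.dot(y) - np.outer(Hy, Hy) / y.dot(Hy)

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite Hessian
b = rng.standard_normal(n)
x, H = rng.standard_normal(n), np.eye(n)
grads = []
for _ in range(n):
    g = A @ x - b                      # gradient of 0.5 x^T A x - b^T x
    grads.append(g)
    d = -H @ g
    alpha = -g.dot(d) / d.dot(A @ d)   # exact one-dimensional search
    s = alpha * d
    y = A @ (x + s) - b - g
    H = dfp_update(H, s, y)
    x = x + s
G = np.array(grads)
print(np.round(G @ G.T, 6))            # off-diagonal entries are ~0
```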

13.
A basic algorithm for the minimization of a differentiable convex function (in particular, a strictly convex quadratic function) defined on the convex hull of m points in R^n is outlined. Each iteration of the algorithm is implemented in barycentric coordinates, the number of which is equal to m. The method is based on a new procedure for finding the projection of the gradient of the objective function onto a simplicial cone in R^m, which is the tangent cone at the current point to the simplex defined by the usual constraints on barycentric coordinates. It is shown that this projection can be computed in O(m log m) operations. For strictly convex quadratic functions, the basic method can be refined to a noniterative method terminating with the optimal solution.
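The O(m log m) cost is of the same order as the well-known sort-based Euclidean projection onto the standard simplex, sketched below for illustration (this is the classical routine, not the tangent-cone projection developed in the paper):

```python
import numpy as np

def project_onto_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1},
    computed in O(m log m) time, dominated by one sort."""
    m = v.size
    u = np.sort(v)[::-1]                                # sort in descending order
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, m + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)                      # shift that enforces the constraints
    return np.maximum(v - theta, 0.0)
```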

14.
The convergence properties of the Davidon-Fletcher-Powell method when applied to the minimization of convex functions are considered for the case where the one-dimensional minimization required at each iteration is not solved exactly. Conditions on the error incurred at each iteration are given which are sufficient for the original method to have a linear or superlinear rate of convergence, and for the restarted version to have an n-step quadratic rate of convergence. Sponsored by the United States Army under Contract No. DA-31-124-ARO-D-462.

15.
The low rank solution of the Q-weighted nearest correlation matrix problem is studied in this paper. Based on the properties of the Q-weighted norm and the Gramian representation, we first reformulate the Q-weighted nearest correlation matrix problem as a minimization problem of a trace function with quadratic constraints, and then prove that the solution of this trace-function minimization problem is a stationary point of the original problem if it is rank-deficient. Finally, the nonmonotone spectral projected gradient method is constructed to solve the reformulated problem. Numerical examples illustrate that the new method is feasible and effective.

16.
Quadratically convergent algorithms and one-dimensional search schemes
In this paper, the performances of three quadratically convergent algorithms coupled with four one-dimensional search schemes are studied through several nonquadratic examples. The algorithms are the rank-one algorithm (Algorithm I), the projection algorithm (Algorithm II), and the Fletcher-Reeves algorithm (Algorithm III). The search schemes are the exact quadratic search (EQS), the exact cubic search (ECS), the approximate quadratic search (AQS), and the approximate cubic search (ACS). The performances are analyzed in terms of number of iterations and number of equivalent function evaluations for convergence. From the numerical experiments, the following conclusions are found: (a) while the number of iterations generally increases by relaxing the search stopping condition, the number of equivalent function evaluations decreases; therefore, approximate searches should be preferred to exact searches; (b) the numbers of iterations for ACS, ECS, and EQS are about the same; therefore, the use of more sophisticated, higher order search schemes is not called for; the present ACS scheme, modified so that only the function, instead of the gradient, is used in bracketing the minimal point, could prove to be most desirable in terms of the number of equivalent function evaluations; (c) for Algorithm I, ACS and AQS yield almost identical results; it is believed that further improvements in efficiency are possible if one uses a fixed stepsize approach, thus bypassing the one-dimensional search completely; (d) the combination of Algorithm II and ACS exhibits high efficiency in treating functions whose order is higher than two and whose Hessian at the minimal point is singular; and (e) Algorithm III, even with the best search scheme, is inefficient in treating functions with flat bottoms; it is doubtful that the simplicity of its update will compensate for its inefficiency in such pathological cases. This research was supported by the National Science Foundation, Grant No. 32453.

17.
We propose two linearly convergent descent methods for finding a minimizer of a convex quadratic spline and establish global error estimates for the iterates. One application of such descent methods is to solve convex quadratic programs, since they can be reformulated as problems of unconstrained minimization of convex quadratic splines. In particular, we derive several new linearly convergent algorithms for solving convex quadratic programs. These algorithms could be classified as row-action methods, matrix-splitting methods, and Newton-type methods.

18.
There is a family of gradient algorithms (Broyden, 1970) that includes many useful methods for calculating the least value of a function F(x), and some of these algorithms have been extended to solve linearly constrained problems (Fletcher, 1971). Some new and fundamental properties of these algorithms are given, in the case that F(x) is a positive definite quadratic function. In particular, these properties are relevant to the case when only some of the iterations of an algorithm make a complete linear search. They suggest that Goldfarb's (1969) algorithm for linearly constrained problems has excellent quadratic termination properties, and it is proved that these properties are better than has been stated in previously published papers. Also, a new technique is identified for unconstrained minimization without linear searches.
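For the reader's convenience, the one-parameter (Broyden) family of inverse-Hessian updates referred to above can be written in the standard textbook form (quoted from the general literature, not from the paper; here s_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k, and φ_k is the family parameter, with φ_k = 0 giving DFP and φ_k = 1 giving BFGS):

```latex
H_{k+1} = H_k
        - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k}
        + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
        + \phi_k \left( y_k^{\top} H_k y_k \right) v_k v_k^{\top},
\qquad
v_k = \frac{s_k}{s_k^{\top} y_k} - \frac{H_k y_k}{y_k^{\top} H_k y_k}.
```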

19.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

20.
We are interested in the quadratic eigenvalue problem of damped oscillations where the damping matrix has dimension one. This describes systems with one point damper. A generic example is a linear n-mass oscillator fixed on one end and damped on the other end. We prove that in this case the system parameters (mass and spring constants) are uniquely (up to a multiplicative constant) determined by any given set of the eigenvalues in the left half plane. We also design an effective construction of the system parameters from the spectral data. We next propose an efficient method for solving the Ljapunov equation generated by arbitrary stiffness and mass matrices and a one-dimensional damping matrix. The method is particularly efficient if the Ljapunov equation has to be solved many times where only the damping dyadic is varied. In particular, the method finds an optimal position of a damper in some 60n^3 operations. We apply this method to our generic example and show, at least numerically, that the damping is optimal (in the sense that the solution of a corresponding Ljapunov equation has a minimal trace) if all eigenvalues are brought together. We include some perturbation results concerning the damping factor as the varying parameter. The results are hoped to be of some help in studying damping matrices of rank much smaller than the dimension of the problem.
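A minimal sketch of the trace criterion mentioned above, assuming SciPy is available and that M, C, K are hypothetical mass, damping, and stiffness matrices of an asymptotically stable system M x'' + C x' + K x = 0; the paper's fast per-damper solver is not reproduced, and the normalization of the Ljapunov right-hand side may differ from the paper's:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def damping_trace_criterion(M, C, K):
    """Trace of the solution X of the Ljapunov equation  A^T X + X A = -I,
    where A is the first-order (state-space) matrix of  M x'' + C x' + K x = 0.
    A smaller trace indicates better damping; the cited paper minimizes such a
    criterion over the damper position for a rank-one damping matrix C."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    X = solve_continuous_lyapunov(A.T, -np.eye(2 * n))  # solves A^T X + X A = -I
    return np.trace(X)
```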
