Similar Literature
20 similar documents found.
1.
This paper surveys some of the existing approaches to quasi-Newton methods and introduces a new way for constructing inverse Hessian approximations for such algorithms. This new approach is based on restricting Newton's method to subspaces over which the inverse Hessian is assumed to be known, while expanding this subspace using gradient information. It is shown that this approach can lead to some well-known formulas for updating the inverse Hessian approximation. Deriving such updates through this approach provides new understanding of these formulas and their relation to the pseudo-Newton-Raphson algorithm.
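For reference, a well-known inverse Hessian update that derivations of this kind recover is the BFGS formula (standard background, not quoted from the paper), with s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k:

    H_{k+1} = (I − s_k y_kᵀ/(y_kᵀ s_k)) H_k (I − y_k s_kᵀ/(y_kᵀ s_k)) + s_k s_kᵀ/(y_kᵀ s_k)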

2.
The authors derived what they termed quasi-Newton multi-step methods in [2]. These methods have demonstrated substantial numerical improvements over the standard single-step Secant-based BFGS. Such methods use a variant of the Secant equation that the updated Hessian (or its inverse) satisfies at each iteration. In this paper, new methods are explored for which the updated Hessians satisfy multiple relations of the Secant type. A rational model is employed in developing the new methods. The model hosts a free parameter, which is exploited to enforce symmetry on the updated Hessian approximation matrix thus obtained. The numerical performance of these techniques is then investigated and compared to other methods. Our results are encouraging, and the improvements achieved supersede those obtained from other existing methods at minimal extra storage and computational overhead.
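For background (standard notation, not quoted from the paper): single-step methods impose the secant equation

    B_{k+1} s_k = y_k,   s_k = x_{k+1} − x_k,   y_k = g_{k+1} − g_k,

while the multi-step variants discussed here require the update to satisfy several relations of this type at once, schematically B_{k+1} s_{k−j} ≈ y_{k−j} for j = 0, …, m − 1.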

3.
We describe an enhanced version of the primal-dual interior point algorithm in Lasdon, Plummer, and Yu (ORSA Journal on Computing, vol. 7, no. 3, pp. 321–332, 1995), designed to improve convergence with minimal loss of efficiency and to solve large sparse nonlinear problems which may not be convex. New features include (a) a backtracking linesearch using an L1 exact penalty function, (b) ensuring that search directions are downhill for this function by increasing Lagrangian Hessian diagonal elements when necessary, (c) a quasi-Newton option, where the Lagrangian Hessian is replaced by a positive definite approximation, (d) inexact solution of each barrier subproblem, in order to approach the central trajectory as the barrier parameter approaches zero, and (e) solution of the symmetric indefinite linear Newton equations using a multifrontal sparse Gaussian elimination procedure, as implemented in the MA47 subroutine from the Harwell Library (Rutherford Appleton Laboratory Report RAL-95-001, Oxfordshire, UK, Jan. 1995). Second derivatives of all problem functions are required when the true Hessian option is used. A Fortran implementation is briefly described. Computational results are presented for 34 smaller models coded in Fortran, where first and second derivatives are approximated by differencing, and for 89 larger GAMS models, where analytic first derivatives are available and finite differencing is used for second partials. The GAMS results are, to our knowledge, the first to show the performance of this promising class of algorithms on large sparse NLPs. For both small and large problems, both the true Hessian and quasi-Newton options are quite reliable and converge rapidly. Using the true Hessian, INTOPT is as reliable as MINOS on the GAMS models, although not as reliable as CONOPT. Computation times are considerably longer than for the other two solvers. However, interior point methods should be considerably faster than they are here when analytic second derivatives are available, and algorithmic improvements and problem preprocessing should further narrow the gap.
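A minimal sketch of feature (a), the backtracking line search on an L1 exact penalty function. The helper names (f, c, D_phi) are hypothetical, and the paper's Fortran implementation handles far more detail; this only illustrates the idea.

    import numpy as np

    def l1_penalty(f, c, x, mu):
        # L1 exact penalty merit function: f(x) + mu * sum_i |c_i(x)|.
        # f and c are hypothetical callables for the objective and the
        # constraint residual vector (zero at feasible points).
        return f(x) + mu * np.sum(np.abs(c(x)))

    def backtracking(f, c, x, d, mu, D_phi, alpha=1.0, beta=0.5,
                     sigma=1e-4, max_iter=50):
        # Armijo-style backtracking on the merit function along d.
        # D_phi is an estimate of the directional derivative of the merit
        # function at x along d; it must be negative (downhill), which
        # feature (b) of the algorithm is designed to guarantee.
        phi0 = l1_penalty(f, c, x, mu)
        for _ in range(max_iter):
            if l1_penalty(f, c, x + alpha * d, mu) <= phi0 + sigma * alpha * D_phi:
                break
            alpha *= beta
        return alpha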

4.
Multi-step quasi-Newton methods for optimization
Quasi-Newton methods update, at each iteration, the existing Hessian approximation (or its inverse) by means of data deriving from the step just completed. We show how “multi-step” methods (employing, in addition, data from previous iterations) may be constructed by means of interpolating polynomials, leading to a generalization of the “secant” (or “quasi-Newton”) equation. The issue of positive-definiteness in the Hessian approximation is addressed and shown to depend on a generalized version of the condition which is required to hold in the original “single-step” methods. The results of extensive numerical experimentation indicate strongly that computational advantages can accrue from such an approach (by comparison with “single-step” methods), particularly as the dimension of the problem increases.
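In the two-step case, the interpolation construction leads to a generalized secant equation of the schematic form (a hedged reconstruction from the general literature on these methods, not quoted from the paper)

    B_{k+1} (s_k − δ_k s_{k−1}) = y_k − δ_k y_{k−1},

where the scalar δ_k is determined by the parametrization of the interpolating curve; δ_k = 0 recovers the ordinary single-step secant equation.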

5.
The paper examines methods for increasing the dimension of a quasi-Newton approximation to a Hessian matrix when the dimension of the problem is increased. A new method is proposed, and numerical results are given to demonstrate the superiority of the new method over existing methods. (The work of the first author was partially supported by the Air Force Office of Scientific Research under Grant No. AFOSR-86-0170.)

6.
Quasi-Newton methods are generally held to be the most efficient minimization methods for small to medium sized problems. Among these, the symmetric rank-one update of Broyden (Math. Comp., vol. 21, pp. 368–381, 1967) was disregarded for a long time because of its potential failure. The work of Conn, Gould and Toint (Math. Prog., vol. 50, pp. 177–195, 1991), Kelley and Sachs (COAP, vol. 9, pp. 43–64, 1998) and Khalfan, Byrd and Schnabel (SIOPT, vol. 3, pp. 1–24, 1993; SIOPT, vol. 6, pp. 1025–1039, 1996) has renewed interest in this method. However, the question of boundedness of the generated matrix sequence was not resolved by this work. In the present paper it is shown that a slightly modified version of this update generates bounded updates and converges superlinearly for uniformly convex functions. Numerical results support these theoretical considerations.
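For context, a sketch of the textbook SR1 update with the usual skipping safeguard (the standard form, not the specific modification analysed in the paper):

    import numpy as np

    def sr1_update(B, s, y, r=1e-8):
        # Textbook symmetric rank-one (SR1) update of a Hessian
        # approximation B. The update is skipped whenever the
        # denominator (y - B s)^T s is too small relative to
        # ||s|| * ||y - B s||, the usual safeguard against an
        # undefined or numerically unstable update.
        v = y - B @ s
        denom = v @ s
        if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
            return B  # skip the update
        return B + np.outer(v, v) / denom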

7.
Multistep quasi-Newton optimization methods use data from more than one previous step to construct the current Hessian approximation. These methods were introduced in [3, 4] where it is shown how to construct such methods by means of interpolating curves. To obtain a better parametrization of the interpolation, Ford [2] developed the idea of “implicit” methods. In this paper, we describe a derivation of new implicit updates which are similar to methods I4 and I5 created in [17]. The experimental results presented here show that both of the new methods produce better performance than the existing methods, especially as the dimension of the test problem grows.

8.
Optimization, 2012, 61(12): 2229–2246
The secant (quasi-Newton) equation plays one of the most important roles in finding an optimal solution in nonlinear optimization. Curvature information must satisfy the usual secant equation to ensure positive definiteness of the Hessian approximation. In this work, we present a new diagonal updating scheme that improves the Hessian approximation via a modified weak secant equation for the diagonal quasi-Newton (DQN) method. Gradient and function evaluations are utilized to obtain a new weak secant equation and achieve higher-order accuracy in the curvature information in the proposed method. Modified DQN methods based on the modified weak secant equation are globally convergent. Extended numerical results indicate the advantages of the modified DQN methods over the usual ones and some classical conjugate gradient methods.
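For background (standard notation, not quoted from the paper): a diagonal matrix generally cannot satisfy the full secant equation B_{k+1} s_k = y_k, so diagonal quasi-Newton methods impose the weak secant equation

    s_kᵀ B_{k+1} s_k = s_kᵀ y_k,

which constrains only the curvature of the approximation along the last step; the modification studied here enriches this relation with function value information.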

9.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and is therefore capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular direction of search, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties which are similar to or which generalize the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method, with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. (An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.)

10.
A quasi-Newton trust-region method
The classical trust-region method for unconstrained minimization can be augmented with a line search that finds a point that satisfies the Wolfe conditions. One can use this new method to define an algorithm that simultaneously satisfies the quasi-Newton condition at each iteration and maintains a positive-definite approximation to the Hessian of the objective function. This new algorithm has strong global convergence properties and is robust and efficient in practice.
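A minimal sketch of the Wolfe-condition test such a line search must pass (generic helper names; f and grad are assumptions, not the paper's code):

    import numpy as np

    def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
        # Check the (weak) Wolfe conditions at step length alpha.
        # f and grad are hypothetical callables for the objective and
        # its gradient. The curvature condition implies s^T y > 0 for
        # the accepted step, which is exactly what keeps a BFGS-type
        # Hessian approximation positive definite.
        fx, gxd = f(x), grad(x) @ d
        x_new = x + alpha * d
        armijo = f(x_new) <= fx + c1 * alpha * gxd
        curvature = grad(x_new) @ d >= c2 * gxd
        return armijo and curvature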

11.
In this paper, the classical Gauss-Newton method for the unconstrained least squares problem is modified by introducing a quasi-Newton approximation to the second-order term of the Hessian. Various quasi-Newton formulas are considered, and numerical experiments show that most of them are more efficient on large residual problems than the Gauss-Newton method and a general purpose minimization algorithm based upon the BFGS formula. A particular quasi-Newton formula is shown numerically to be superior. Further improvements are obtained by using a line search that exploits the special form of the function.
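For background (standard Gauss-Newton notation, not quoted from the paper): for f(x) = ½‖r(x)‖² the Hessian splits as

    ∇²f(x) = J(x)ᵀ J(x) + Σ_{i=1}^{m} r_i(x) ∇²r_i(x),

where J is the Jacobian of r. Gauss-Newton drops the second-order term; the modification described here instead approximates it with a quasi-Newton update, which matters precisely on large-residual problems where this term is not negligible.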

12.
This paper develops a modified quasi-Newton method for structured unconstrained optimization with partial information on the Hessian, based on a better approximation to the Hessian along the current search direction. The new approximation is determined by both function values and gradients at the last two iterates, unlike the original one, which uses only the gradients at the last two iterates. The modified method possesses local and superlinear convergence. Numerical experiments show that the proposed method is encouraging compared with the methods proposed in [4] for structured unconstrained optimization. (Presented at the 6th International Conference on Optimization: Techniques and Applications, Ballarat, Australia, December 9–11, 2004.)

13.
Quasi-Newton equations play a central role in quasi-Newton methods for optimization, and various quasi-Newton equations are available. This paper gives a survey of these quasi-Newton equations and studies the properties of quasi-Newton methods whose updates satisfy them. These include single-step quasi-Newton equations that use only gradient information or both gradient and function value information in one step, and multi-step quasi-Newton equations that use the gradient information from the last m steps. The main properties studied include finite termination, invariance, heredity of positive definite updates, consistency of search directions, global convergence, and local superlinear convergence.
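Schematically (standard notation; a hedged summary rather than the paper's own formulas), the equations surveyed fall into three families:

    B_{k+1} s_k = y_k                                            (standard: gradients only)
    B_{k+1} s_k = ỹ_k, with ỹ_k = y_k + a function-value correction   (single-step, modified)
    B_{k+1} r_k = w_k, with r_k, w_k built from the last m steps       (multi-step)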

14.
We consider Hessian approximation schemes for large-scale unconstrained optimization in the context of discretized problems. The Hessians considered typically exhibit a nontrivial sparsity and partial separability structure, which allows iterative quasi-Newton methods to solve them despite their size. Structured finite-difference methods and updating schemes based on the secant equation are presented and compared numerically inside the multilevel trust-region algorithm proposed by Gratton et al. (IMA J. Numer. Anal. 28(4):827–861, 2008).

15.
This paper aims to extend the self-scaling scheme, appropriate for quasi-Newton methods, to two-step quasi-Newton methods. The scaling is performed during the main step of updating the current Hessian approximation and prior to the computation of the next quasi-Newton direction, whenever necessary. The global convergence of the new method is explored on uniformly convex functions with the standard Wolfe line search. Preliminary numerical testing shows that this technique improves the performance of the two-step method substantially.
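A classical self-scaling choice (standard background, not necessarily the factor used in the paper) multiplies the current approximation by

    τ_k = (y_kᵀ s_k) / (s_kᵀ B_k s_k)

before applying the update, so that the scaled matrix better reflects the curvature measured along the last step.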

16.
1. Introduction. This paper deals with the problem of minimizing a sum of squares of nonlinear functions,

    min f(x) = Σ_{i=1}^{m} r_i(x)²,

where r_i(x), i = 1, 2, …, m, are twice continuously differentiable, m ≥ n, r(x) = (r_1(x), r_2(x), …, r_m(x))ᵀ, and "ᵀ" denotes transpose. The nonlinear least squares problem is an important class of optimization problems and appears in many fields such as scientific experiments, maximum likelihood estimation, solution of nonlinear equations, pattern recognition, etc. The derivatives of the function f(x) are given in terms of the Jacobian matrix A ∈ R^{m×n} of r(x), whose elements are …

17.
For solving unconstrained minimization problems, quasi-Newton methods are popular iterative methods. The secant condition, which employs only gradient information, is imposed on these methods. Several researchers have paid attention to other secant conditions in order to obtain a better approximation of the Hessian matrix of the objective function. Recently, Zhang et al. [New quasi-Newton equation and related methods for unconstrained optimization, J. Optim. Theory Appl. 102 (1999) 147–167] and Zhang and Xu [Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations, J. Comput. Appl. Math. 137 (2001) 269–278] proposed a modified secant condition which uses both gradient and function value information in order to obtain higher-order accuracy in approximating the curvature of the objective function. They showed the local and q-superlinear convergence of the BFGS-like and DFP-like updates based on their proposed secant condition. In this paper, we incorporate one parameter into this secant condition to switch smoothly between the standard secant condition and the secant condition of Zhang et al. We consider a modified Broyden family which includes the BFGS-like and DFP-like updates proposed by Zhang et al., and we prove the local and q-superlinear convergence of our method.
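As commonly stated in the literature (a hedged reconstruction; the paper's parametrized version interpolates between this and the standard condition), the modified secant condition of Zhang et al. has the form

    B_{k+1} s_k = ỹ_k,   ỹ_k = y_k + (θ_k / (s_kᵀ s_k)) s_k,
    θ_k = 6(f_k − f_{k+1}) + 3(g_k + g_{k+1})ᵀ s_k,

so that ỹ_k carries both gradient and function-value information.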

18.
Optimization, 2012, 61(3): 375–389
In this paper we consider two alternative choices for the factor used to scale the initial Hessian approximation, before updating by a member of the Broyden family of updates for quasi-Newton optimization methods. Through extensive computational experiments carried out on a set of standard test problems from the CUTE collection, using efficient implementations of the quasi-Newton method, we show that the proposed new scaling factors are better, in terms of efficiency achieved (number of iterations, number of function and gradient evaluations), than the standard choice proposed in the literature.
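The standard choice referred to here is commonly taken to be the Shanno–Phua factor (stated from general knowledge, not from the paper), which rescales the initial inverse Hessian approximation before the first update:

    H_0 ← ((y_0ᵀ s_0) / (y_0ᵀ y_0)) I.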

19.
Nonsmooth optimization problems are divided into two categories. The first is composite nonsmooth problems, where the generalized gradient can be approximated by information available at the current point. The second is basic nonsmooth problems, where the generalized gradient must be approximated using information calculated at previous iterates. Methods are considered for minimizing composite nonsmooth problems in which the nonsmooth function is made up of a finite number of smooth functions, in particular max functions. A descent method which uses an active set strategy, a nonsmooth line search, and a quasi-Newton approximation to the reduced Hessian of a Lagrangian function is presented. The theoretical properties of the method are discussed, and favorable numerical experience on a wide range of test problems is reported. (This work was carried out at the University of Dundee from 1976–1979 and at the University of Kentucky at Lexington from 1979–1980. The provision of facilities in both universities is gratefully acknowledged, as well as the support of NSF Grant No. ECS-79-23272 for the latter period. The first author also wishes to acknowledge financial support from a George Murray Scholarship from the University of Adelaide and a University of Dundee Research Scholarship for the former period.)

20.
We consider the symmetric rank-one, quasi-Newton formula. The hereditary properties of this formula do not require quasi-Newton directions of search. Therefore, this formula is easy to use in constrained optimization algorithms; no explicit projections of either the Hessian approximations or the parameter changes are required. Moreover, the entire Hessian approximation is available at each iteration for determining the direction of search, which need not be a quasi-Newton direction. Theoretical difficulties, however, exist. Even for a positive-definite, quadratic function with no constraints, it is possible that the symmetric rank-one update may not be defined at some iteration. In this paper, we first demonstrate that such failures of definition correspond to either losses of independence in the directions of search being generated or to near-singularity of the Hessian approximation being generated. We then describe a procedure that guarantees that these updates are well-defined for any nonsingular quadratic function. This procedure has been incorporated into an algorithm for minimizing a function subject to box constraints. Box constraints arise naturally in the minimization of a function with many minima or a function that is defined only in some subregion of the space.
