Similar Articles
20 similar articles found.
1.
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. We describe an algorithm which uses an intermediate (and variable) amount of storage and whose convergence is likewise intermediate: generally better than that observed for conjugate gradient algorithms, but not as good as that of a quasi-Newton approach. The new algorithm generates a form of conjugate gradient search direction on most iterations, but periodically takes a quasi-Newton step to improve convergence. Some theoretical background for the new algorithm was presented in an earlier paper; here we examine the properties of the new algorithm and its implementation. We also present the results of some computational experience. This research was supported by the National Research Council of Canada, grant number A-8962.
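As a rough illustration only (the paper's actual storage scheme, direction formula, and line search differ), the sketch below alternates Fletcher–Reeves conjugate gradient directions with a periodic BFGS quasi-Newton step; all names and parameters are hypothetical.

```python
import numpy as np

def hybrid_cg_qn(f, grad, x0, m=5, iters=100):
    """Mostly CG directions, with a quasi-Newton step every m iterations."""
    n = x0.size
    x, g = x0.copy(), grad(x0)
    d, H = -g, np.eye(n)                    # CG direction; inverse-Hessian approx.
    for k in range(iters):
        t = 1.0                             # simple Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                   # curvature condition: BFGS update of H
            rho = 1.0 / (y @ s)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        if (k + 1) % m == 0:
            d = -H @ g_new                  # periodic quasi-Newton step
        else:
            beta = (g_new @ g_new) / (g @ g)
            d = -g_new + beta * d           # Fletcher-Reeves CG direction
        x, g = x_new, g_new
    return x
```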

2.
We consider the effect of approximation on the performance of quasi-Newton methods for infinite dimensional problems. In particular we study methods in which the approximation is refined at each iterate. We show how the local convergence behavior of the quasi-Newton method in the infinite dimensional setting is affected by the refinement strategy. Applications to boundary value problems and integral equations are considered. The research of this author was supported by NSF grant DMS-8601139 and AFOSR grant AFOSR-ISSA-860074.

3.
Quasi-Newton algorithms minimize a function $F(x)$, $x \in R^n$, searching at any iteration $k$ along the direction $s^k = -H^k g^k$, where $g^k = \nabla F(x^k)$ and $H^k$ approximates in some sense the inverse Hessian of $F(x)$ at $x^k$. When the matrix $H$ is updated according to the formulas in Broyden's family and an exact line search is performed at every iteration, a compact algorithm (free of the Broyden family parameter) can be conceived in terms of the following $n \times n$ matrix: $$H_R = H - \frac{Hgg^TH}{g^THg},$$ which can be viewed as an approximate reduced inverse Hessian. In this paper, a new algorithm is proposed which uses at every iteration an $(n-1) \times (n-1)$ matrix $K$ related to $H_R$ by $$H_R = Q\begin{bmatrix} 0 & 0 \\ 0 & K \end{bmatrix}Q,$$ where $Q$ is a suitable orthogonal $n \times n$ matrix. The updating formula in terms of the matrix $K$ incorporated in this algorithm is only moderately more complicated than the standard updating formulas for variable-metric methods, but, at the same time, it updates at every iteration a positive definite matrix $K$ instead of a singular matrix $H_R$. Besides being more compact than algorithms with updating formulas in Broyden's class, a further noticeable feature of the reduced Hessian algorithm is that the downhill condition can be stated in a simple way, so that efficient line searches may be implemented.
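A small numerical check of the reduced inverse Hessian formula quoted above: $H_R$ annihilates the gradient $g$, so it is singular of rank $n-1$, which is what motivates carrying the positive definite $(n-1)\times(n-1)$ matrix $K$ instead. The sketch assumes NumPy and random test data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)           # a positive definite "inverse Hessian"
g = rng.standard_normal(n)            # current gradient

Hg = H @ g
H_R = H - np.outer(Hg, Hg) / (g @ Hg) # H_R = H - H g g^T H / (g^T H g)

print(np.linalg.norm(H_R @ g))        # ~0: g lies in the null space of H_R
print(np.linalg.matrix_rank(H_R))     # n - 1
```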

4.
In this work, some interesting relations between basic optimization results and algorithms for nonconvex functions (such as BFGS and secant methods) are pointed out. In particular, some innovative tools for improving our recent secant and BFGS-type algorithms are described in detail.

5.
Nonsmooth optimization via quasi-Newton methods
We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.
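The line search described here is, in essence, a bracketing scheme for the weak Wolfe conditions. A minimal sketch of such an Armijo–Wolfe bisection search follows (a simplified reading of the abstract, not the authors' exact routine):

```python
import numpy as np

def weak_wolfe_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Find t with f(x+t d) <= f(x) + c1 t g'd  and  grad(x+t d)'d >= c2 g'd."""
    f0, g0d = f(x), grad(x) @ d
    lo, hi, t = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0d:      # Armijo fails: step too long
            hi = t
        elif grad(x + t * d) @ d < c2 * g0d:      # Wolfe fails: step too short
            lo = t
        else:
            return t                              # both conditions hold
        # shrink the nested interval [lo, hi], or expand while unbounded above
        t = (lo + hi) / 2 if np.isfinite(hi) else 2 * lo
    return t
```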

6.
Multistep quasi-Newton optimization methods use data from more than one previous step to construct the current Hessian approximation. These methods were introduced in [3, 4] where it is shown how to construct such methods by means of interpolating curves. To obtain a better parametrization of the interpolation, Ford [2] developed the idea of “implicit” methods. In this paper, we describe a derivation of new implicit updates which are similar to methods I4 and I5 created in [17]. The experimental results presented here show that both of the new methods produce better performance than the existing methods, especially as the dimension of the test problem grows.

7.
Multi-step quasi-Newton methods for optimization
Quasi-Newton methods update, at each iteration, the existing Hessian approximation (or its inverse) by means of data deriving from the step just completed. We show how “multi-step” methods (employing, in addition, data from previous iterations) may be constructed by means of interpolating polynomials, leading to a generalization of the “secant” (or “quasi-Newton”) equation. The issue of positive-definiteness in the Hessian approximation is addressed and shown to depend on a generalized version of the condition which is required to hold in the original “single-step” methods. The results of extensive numerical experimentation indicate strongly that computational advantages can accrue from such an approach (by comparison with “single-step” methods), particularly as the dimension of the problem increases.
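For reference, the single-step inverse BFGS update that multi-step methods generalize is sketched below; it enforces the secant equation $H_{k+1} y_k = s_k$ with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. Multi-step variants replace the pair $(s_k, y_k)$ by data assembled from several previous iterates via interpolating polynomials (a generic sketch, not the authors' multi-step formulas).

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One BFGS update of the inverse Hessian approximation H.

    Requires the curvature condition y's > 0 so positive definiteness is kept;
    the resulting H satisfies the secant equation H y = s.
    """
    rho = 1.0 / (y @ s)
    n = len(s)
    V = np.eye(n) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```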

8.
The parallel quasi-Newton method based on updating conjugate subspaces proposed in [4] can be very effective for large-scale sparse minimization because conjugate subspaces with respect to sparse Hessians are usually easy to obtain. We demonstrate this point in this paper for the partially separable case with matrices updated by a quasi-Newton scheme of Griewank and Toint [2, 3]. The algorithm presented is suitable for parallel computation and economical in computer storage. Some testing results of the algorithm on an Alliant FX/8 minisupercomputer are reported. The material is based on work supported in part by the National Science Foundation under Grant No. DMS 8602419 and by the Center for Supercomputing Research and Development at the University of Illinois.

9.
It is shown that algorithms for minimizing an unconstrained function $F(x)$, $x \in E^n$, which are solely methods of conjugate directions can be expected to exhibit only an $n$ or $(n-1)$ step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit every-step superlinear convergence. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of $F(x)$. Research was supported in part by the Army Research Office, Contract Number DAHC 19-69-C-0017, and the Office of Naval Research, Contract Number N00014-71-C-0116 (NR 047-99).

10.
In this paper, we propose new members of the Broyden family of quasi-Newton methods. We develop, on the basis of well-known least-change results for the BFGS and DFP updates, a measure for the Broyden family which seeks to take into account the change in both the Hessian approximation and its inverse. The proposal is then to choose the formula which gives the least value of this measure in terms of the two parameters available, and hence to produce an update which is optimal in the sense of the given measure. Several approaches to the problem of minimizing the measure are considered, from which new updates are obtained. In particular, one approach yields a new variational result for the Davidon optimally conditioned method and another yields a reasonable modification to this method. The paper is also concerned with the possibility of estimating, in a certain sense, the size of the eigenvalues of the Hessian approximation on the basis of two available scalars. This allows one to derive further modifications to the above-mentioned methods. Comparisons with the BFGS and Davidon methods are made on a set of standard test problems; the results are promising for certain new methods. Part of this work was done during the author's visits at International Centre for Theoretical Physics, Trieste, Italy, at Systems Department, University of Calabria, Cosenza, Italy, and at Ajman University College of Science and Technology, Ajman, United Arab Emirates. The author expresses his gratitude to Professor L. Grandinetti for his encouragement and thanks the anonymous referees for their careful reading of an earlier draft of the paper and valuable comments, which led to a substantial improvement of the original paper.
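For orientation, a sketch of the standard one-parameter Broyden family of (direct) Hessian updates within which such measures are optimized: $\phi = 0$ gives BFGS and $\phi = 1$ gives DFP. The measure-minimizing parameter choice proposed in the paper is not reproduced here.

```python
import numpy as np

def broyden_family_update(B, s, y, phi=0.0):
    """Broyden family update of the Hessian approximation B (phi=0: BFGS, phi=1: DFP).

    Every member satisfies the secant equation B_new s = y.
    """
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    v = y / ys - Bs / sBs
    return (B - np.outer(Bs, Bs) / sBs
              + np.outer(y, y) / ys
              + phi * sBs * np.outer(v, v))
```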

11.
This paper proposes a new algorithm for solving nonsmooth multiobjective programs. The algorithm is a descent-direction method that computes a critical point (satisfying a necessary condition for Pareto optimality). We analyze both global and local convergence results under some assumptions. Numerical tests are also given.

12.
Two approaches to quasi-Newton methods for constrained optimization problems in $R^n$ are presented. These approaches are based on a class of Lagrange multiplier approximation formulas used by the author in his previous work on Newton's method for constrained problems. The first approach is set in the framework of a diagonalized multiplier method. From this point of view, a new update rule for the Lagrange multipliers which depends on the particular quasi-Newton method employed is given. This update rule, in contrast to most other update rules, does not require exact minimization of the intermediate unconstrained problem. In fact, the optimal convergence rate is attained in the extreme case when only one step of a quasi-Newton method is taken on this intermediate problem. The second approach transforms the constrained optimization problem into an unconstrained problem of the same dimension. The author would like to thank J. Moré and M. J. D. Powell for comments related to the material in Section 13. He also thanks J. Nocedal for the computer results in Tables 1–3 and M. Wright for the results in Table 4, which were obtained via one of her general programs. Discussions with M. R. Hestenes and A. Miele regarding their contributions to this area were very helpful. Many individuals, including J. E. Dennis, made useful general comments at various stages of this paper. Finally, the author is particularly thankful to R. Byrd, M. Heath, and R. McCord for reading the paper in detail and suggesting many improvements. This work was supported by the Energy Research and Development Administration, Contract No. E-(40-1)-5046, and was performed in part while the author was visiting the Department of Operations Research, Stanford University, Stanford, California.
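As a rough sketch of a diagonalized multiplier method (not the paper's multiplier update rule, which depends on the quasi-Newton method used), one can alternate a few BFGS iterations on the augmented Lagrangian with the classical Hestenes–Powell update $\lambda \leftarrow \lambda + c\,h(x)$; SciPy's BFGS stands in for the inner quasi-Newton solver.

```python
import numpy as np
from scipy.optimize import minimize

def diagonalized_multiplier(f, h, x0, c=10.0, outer=20, inner=1):
    """Minimize f(x) subject to h(x) = 0 without exact inner minimization."""
    x, lam = x0.copy(), np.zeros(len(h(x0)))
    for _ in range(outer):
        # augmented Lagrangian for the current multiplier estimate
        L = lambda z, lam=lam: f(z) + lam @ h(z) + 0.5 * c * h(z) @ h(z)
        # only a few quasi-Newton steps on the intermediate unconstrained problem
        x = minimize(L, x, method="BFGS", options={"maxiter": inner}).x
        lam = lam + c * h(x)          # Hestenes-Powell multiplier update
    return x, lam
```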

13.
We discuss methods for solving the unconstrained optimization problem on parallel computers, when the number of variables is sufficiently small that quasi-Newton methods can be used. We concentrate mainly, but not exclusively, on problems where function evaluation is expensive. First we discuss ways to parallelize both the function evaluation costs and the linear algebra calculations in the standard sequential secant method, the BFGS method. Then we discuss new methods that are appropriate when there are enough processors to evaluate the function, gradient, and part but not all of the Hessian at each iteration. We develop new algorithms that utilize this information and analyze their convergence properties. We present computational experiments showing that they are superior to parallelizing either the BFGS method or Newton's method under our assumptions on the number of processors and cost of function evaluation. Finally we discuss ways to effectively utilize the gradient values at unsuccessful trial points that are available in our parallel methods and also in some sequential software packages. Research supported by AFOSR grant AFOSR-85-0251, ARO contract DAAG 29-84-K-0140, NSF grants DCR-8403483 and CCR-8702403, and NSF cooperative agreement DCR-8420944.
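As a minimal illustration of parallelizing function evaluation costs (not the paper's algorithms), the n+1 evaluations of a forward-difference gradient are independent and can be distributed across processors:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fd_gradient_parallel(f, x, h=1e-6, workers=4):
    """Forward-difference gradient with the n+1 evaluations of f run in parallel.

    Note: f must be a picklable module-level function for ProcessPoolExecutor,
    and on Windows/macOS the call site needs an `if __name__ == "__main__":` guard.
    """
    xs = [x] + [x + h * e for e in np.eye(len(x))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        vals = list(pool.map(f, xs))
    return (np.array(vals[1:]) - vals[0]) / h
```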

14.
Sebastian Schlenkrich, Andrea Walther, PAMM 2007, 7(1): 2020091–2020092
In this paper the concepts of partitioned quasi-Newton methods are applied to adjoint Broyden updates. Consequently a corresponding partitioned adjoint Broyden update is presented and local convergence results are given. Numerical results compare the partitioned adjoint Broyden update methods to the corresponding unpartitioned quasi-Newton method and to Newton's method for nonlinear equations. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
Mathematical Programming - In this paper, we study and prove the non-asymptotic superlinear convergence rate of the Broyden class of quasi-Newton algorithms which includes the...

16.
We consider quasi-Newton methods for generalized equations in Banach spaces under metric regularity and give a sufficient condition for q-linear convergence. Then we show that the well-known Broyden update satisfies this sufficient condition in Hilbert spaces. We also establish various modes of q-superlinear convergence of the Broyden update under strong metric subregularity, metric regularity and strong metric regularity. In particular, we show that the Broyden update applied to a generalized equation in Hilbert spaces satisfies the Dennis–Moré condition for q-superlinear convergence. Simple numerical examples illustrate the results.

17.
In this paper, we present two partitioned quasi-Newton methods for solving partially separable nonlinear equations. When the Jacobian is not available, we propose a partitioned Broyden rank-one method and show that the full-step partitioned Broyden rank-one method is locally and superlinearly convergent. By using a well-defined derivative-free line search, we globalize the method and establish its global and superlinear convergence. In the case where the Jacobian is available, we propose a partitioned adjoint Broyden method and show its global and superlinear convergence. We also present some preliminary numerical results. The results show that the two partitioned quasi-Newton methods are effective and competitive for solving large-scale partially separable nonlinear equations.
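A hedged sketch of the partitioned idea, under the simplifying assumption that element i's residuals occupy the same index set as its variables (the paper's scheme, line search, and globalization are more general): each small element Jacobian approximation receives its own Broyden rank-one update, and the full sparse approximation is assembled only to compute the step.

```python
import numpy as np

def partitioned_broyden_step(fs, idx, Bs, x, Fx):
    """One full step; fs[i] maps the variables x[idx[i]] to residuals at idx[i]."""
    n = len(x)
    B = np.zeros((n, n))
    for I, Bi in zip(idx, Bs):
        B[np.ix_(I, I)] += Bi               # assemble the sparse Jacobian approx.
    s = np.linalg.solve(B, -Fx)             # full quasi-Newton step
    x_new = x + s
    for i, I in enumerate(idx):
        si = s[I]
        if si @ si > 1e-14:                 # element-wise Broyden rank-one update
            yi = fs[i](x_new[I]) - fs[i](x[I])
            Bs[i] = Bs[i] + np.outer(yi - Bs[i] @ si, si) / (si @ si)
    return x_new, Bs
```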

18.
Local convergence of quasi-Newton methods for B-differentiable equations
We study local convergence of quasi-Newton methods for solving systems of nonlinear equations defined by B-differentiable functions. We extend the classical linear and superlinear convergence results for general quasi-Newton methods as well as for Broyden's method. We also show how Broyden's method may be applied to nonlinear complementarity problems and illustrate its computational performance on two small examples.
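For concreteness, the classical Broyden ("good") update for smooth F(x) = 0 that these results extend to the B-differentiable setting can be sketched as follows (a textbook version, not the paper's method):

```python
import numpy as np

def broyden_solve(F, x0, B0, tol=1e-10, max_iter=100):
    """Solve F(x) = 0 with Broyden rank-one updates of the Jacobian approximation B."""
    x, B = x0.copy(), B0.copy()
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)              # quasi-Newton step
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s) # Broyden "good" update: B_new s = y
        Fx = Fx_new
    return x
```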

19.
We present a unified technique for updating approximations to Jacobian or Hessian matrices when any linear structure can be imposed. The updates are derived by variational means, where an operator-weighted Frobenius norm is used, and are finally expressed as solutions of linear equations and/or unconstrained extrema. A certain behavior of the solutions is discussed for certain perturbations of the operator and the constraints. Multiple secant relations are then considered. For the nonsparse case, an explicit family of updates is obtained including Broyden, DFP, and BFGS. For the case where some of the matrix elements are prescribed, explicit solutions are obtained if certain conditions are satisfied. When symmetry is assumed, we show, in addition, the connection with the DFP and BFGS updates. This work was partially supported by a grant from Control Data.
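The least-change characterization underlying such variational derivations can be checked numerically in the unweighted Frobenius case: among all matrices satisfying the secant relation As = y, Broyden's update is the one closest to B. A small sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
s = rng.standard_normal(n)
y = rng.standard_normal(n)

B_plus = B + np.outer(y - B @ s, s) / (s @ s)   # Broyden least-change update

# Any other matrix satisfying the secant equation A s = y differs from B_plus
# by a correction that vanishes on s, and is never closer to B in Frobenius norm.
P = np.eye(n) - np.outer(s, s) / (s @ s)        # projector onto the complement of s
for _ in range(5):
    A = B_plus + rng.standard_normal((n, n)) @ P
    assert np.allclose(A @ s, y)
    assert np.linalg.norm(A - B, "fro") >= np.linalg.norm(B_plus - B, "fro")
```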

20.
Mathematical Programming - We study the local convergence of classical quasi-Newton methods for nonlinear optimization. Although it was well established a long time ago that asymptotically these...
