Similar Articles
20 similar articles found (search time: 812 ms)
1.
Min Li, Optimization Letters, 2018, 12(8): 1911-1927
Based on the memoryless BFGS quasi-Newton method, a family of three-term nonlinear conjugate gradient methods is proposed. For any line search, the directions generated by the new methods satisfy the sufficient descent condition. Using some efficient techniques, global convergence results are established when the line search fulfills the Wolfe or the Armijo conditions. Moreover, the r-linear convergence rate of the methods is analyzed as well. Numerical comparisons show that the proposed methods are efficient for the unconstrained optimization problems in the CUTEr library.
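For context, a standard way to obtain a three-term direction from the memoryless BFGS update (the identity matrix updated once with the BFGS formula) is the Shanno direction; the particular scalings used by the proposed family may differ:

$$d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T y_k}{s_k^T y_k}\, s_k + \frac{g_{k+1}^T s_k}{s_k^T y_k}\, y_k - \left(1 + \frac{y_k^T y_k}{s_k^T y_k}\right) \frac{g_{k+1}^T s_k}{s_k^T y_k}\, s_k,$$

where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; the terms in g_{k+1}, s_k and y_k give the "three-term" structure.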

2.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known quasi-Newton BFGS, DFP and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
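A representative member of this class is the self-scaling BFGS update with the Oren-Luenberger scaling factor; the "safely positive definite" modifications studied here add further safeguards on top of it:

$$B_{k+1} = \tau_k\left(B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k}\right) + \frac{y_k y_k^T}{y_k^T s_k}, \qquad \tau_k = \frac{y_k^T s_k}{s_k^T B_k s_k}.$$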

3.
Nonsmooth optimization via quasi-Newton methods
We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.
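A minimal sketch of a bracketing line search of this kind, maintaining nested intervals that retain points satisfying both the Armijo and weak Wolfe conditions; the parameter values and the doubling/bisection policy are illustrative, not the authors' exact procedure:

```python
import numpy as np

def weak_wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bracketing/bisection line search for the weak Wolfe conditions.

    Seeks a step t > 0 with
        f(x + t d) <= f(x) + c1 * t * grad(x)^T d   (Armijo)
        grad(x + t d)^T d >= c2 * grad(x)^T d       (weak Wolfe)
    by maintaining a nested interval [lo, hi] that keeps containing
    such points (e.g., when f is absolutely continuous along the ray).
    """
    f0 = f(x)
    g0 = grad(x) @ d          # directional derivative at t = 0 (must be < 0)
    t, lo, hi = 1.0, 0.0, np.inf
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0:      # Armijo fails: step too long
            hi = t
        elif grad(x + t * d) @ d < c2 * g0:      # Wolfe fails: step too short
            lo = t
        else:
            return t                              # both conditions hold
        t = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lo
    return t  # best effort after max_iter bracketing steps
```

Each bisection halves the bracket, and under the absolute-continuity assumption mentioned above the bracket keeps containing a set of acceptable points of nonzero measure.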

4.
The BFGS method is one of the most effective quasi-Newton algorithms for optimization problems. However, its global convergence for general functions is still open. In this paper, this problem is solved under a new line search technique, and it is shown that other methods in the Broyden class also possess this property. Moreover, the global convergence of the PRP method is established under this new line search. Numerical results are reported to show that the new line search technique is competitive with the normal line search.
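For reference, the standard BFGS update whose global convergence for general (nonconvex) functions is at issue here, with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k:

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k},$$

which keeps B_{k+1} positive definite whenever s_k^T y_k > 0, e.g., under the Wolfe conditions.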

5.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a weaker condition than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
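One standard nonmonotone form (the max-type Armijo condition of Grippo, Lampariello and Lucidi) accepts a step length α_k if it decreases f relative to the worst of the last few iterates rather than the current one; the two forms studied in the paper may differ in detail:

$$f(x_k + \alpha_k d_k) \le \max_{0 \le j \le m(k)} f(x_{k-j}) + \delta\, \alpha_k\, g_k^T d_k, \qquad \delta \in (0,1),\quad m(k) \le \min\{m(k-1)+1,\ M\}.$$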

6.
The present study extends Barzilai and Borwein's method for unconstrained single-objective optimization problems to multiobjective ones. Compared with Newton, quasi-Newton and steepest descent multiobjective optimization methods, the Barzilai-Borwein multiobjective optimization (BBMO) method requires simple and quick calculations: it makes no use of line search methods such as the Armijo rule, which necessitate function evaluations at each iteration. The innovative aspect of the study is this absence of function evaluations, in contrast with the other multiobjective non-parametric methods (e.g. Newton, quasi-Newton and steepest descent) investigated so far. The convergence of the BBMO method is proved for objective functions assumed to be twice continuously differentiable. MATLAB software was used to implement the BBMO method, and the results were compared with those of the other methods mentioned earlier. Using several performance measures, the quality of the nondominated frontier produced by BBMO was compared with those of the above-mentioned methods; for some problems, the approximate nondominated frontiers obtained by the methods were also compared with the exact nondominated frontier. Performance profiles are used to visualize the numerical results presented in the tables.
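As a single-objective building block, the Barzilai-Borwein method replaces the line search by a secant-based step size, so each iteration needs only one gradient evaluation and no function values. The multiobjective extension itself is not reproduced here; this sketch assumes a hypothetical grad callable and uses the BB1 step:

```python
import numpy as np

def bb_gradient(grad, x0, n_iter=100, alpha0=1e-3):
    """Single-objective Barzilai-Borwein gradient method (BB1 step).

    No line search and hence no function evaluations: the step size
    alpha_k = s^T s / s^T y comes from a secant condition alone.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else alpha0   # safeguard for nonconvexity
        x, g = x_new, g_new
    return x

# example: minimize the quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 10.0])
x_star = bb_gradient(lambda x: A @ x, x0=[1.0, 1.0])
```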

7.
Yuan Gonglin, Li Pengyuan, Lu Junyu, Numerical Algorithms, 2022, 91(1): 353-365
The BFGS method, which has great numerical stability, is one of the quasi-Newton line search methods. However, the global convergence of the BFGS method with a Wolfe line...

8.
Quasi-Newton methods in conjunction with the piecewise sequential quadratic programming are investigated for solving mathematical programming with equilibrium constraints, in particular for problems with complementarity constraints. Local convergence as well as superlinear convergence of these quasi-Newton methods can be established under suitable assumptions. In particular, several well-known quasi-Newton methods such as BFGS and DFP are proved to exhibit the local and superlinear convergence.

9.
The quasi-Newton method is a well-known and effective method for solving optimization problems. Since it is a line search method, which needs a line search procedure after determining a search direction at each iteration, a line search rule must be chosen to determine the step size along the search direction. In this paper, we propose a new inexact line search rule for the quasi-Newton method and establish some global convergence results for this method. These results are useful in designing new quasi-Newton methods. Moreover, we analyze the convergence rate of the quasi-Newton method under the new line search rule.
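The abstract does not state the new rule itself; for orientation, here is a sketch of the classical Armijo backtracking rule that such inexact rules refine (all parameter values are illustrative, and x and d are assumed to be NumPy arrays):

```python
def backtracking_armijo(f, x, d, fx, gTd, alpha=1.0, rho=0.5, c1=1e-4,
                        max_backtracks=50):
    """Classical backtracking (Armijo) rule: shrink alpha until
    f(x + alpha d) <= f(x) + c1 * alpha * g^T d.

    fx is f(x) and gTd is grad(x)^T d (must be negative for descent).
    """
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + c1 * alpha * gTd:
            break
        alpha *= rho     # step too long: cut it back geometrically
    return alpha
```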

10.
Based on the memory gradient method for unconstrained single-objective optimization, this paper proposes a memory gradient method for unconstrained multiobjective optimization problems and proves the convergence of the algorithm under Armijo line search. Numerical experiments verify the effectiveness of the algorithm.
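In its single-objective form, a memory gradient method reuses the previous direction; a common template (the paper's multiobjective version replaces the single gradient by a joint descent direction for all objectives) is

$$d_k = \begin{cases} -g_k, & k = 0,\\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases}$$

with β_k chosen so that g_k^T d_k ≤ -c\,\|g_k\|^2 for some c > 0, i.e., so that d_k remains a descent direction.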

11.
12.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction, simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reductions. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, the global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.
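A minimal sketch of one iteration of such a scheme; the directions would come from different Broyden-family updates, and line_search is a placeholder for any step-length routine:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_multidirection_step(f, x, directions, line_search):
    """One iteration of a multi-directional scheme: line-search several
    quasi-Newton directions in parallel, keep the lowest trial point."""
    def trial(d):
        t = line_search(f, x, d)       # step length along direction d
        x_trial = x + t * d
        return f(x_trial), x_trial
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(trial, directions))
    return min(results, key=lambda r: r[0])[1]   # next iterate
```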

13.
We present modifications of the generalized conjugate gradient algorithm of Liu and Storey for unconstrained optimization problems (Ref. 1), extending its applicability to situations where the search directions are not defined. The use of new search directions is proposed and one additional condition is imposed on the inexact line search. The convergence of the resulting algorithm can be established under standard conditions for a twice continuously differentiable function with a bounded level set. Algorithms based on these modifications have been tested on a number of problems, showing considerable improvements. Comparisons with the BFGS and other quasi-Newton methods are also given.
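For reference, the Liu-Storey direction being modified is

$$d_{k+1} = -g_{k+1} + \beta_k^{LS} d_k, \qquad \beta_k^{LS} = \frac{g_{k+1}^T y_k}{-d_k^T g_k};$$

the denominator -d_k^T g_k can vanish under an inexact line search, which is precisely the situation ("search directions not defined") that the modifications address.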

14.
A diagonal sparse quasi-Newton method for unconstrained optimization problems
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm uses an Armijo inexact line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, which markedly reduces the storage and work required to compute the search direction and offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, the global convergence and linear convergence rate of the algorithm are proved, and its superlinear convergence properties are analyzed. Numerical experiments show that the algorithm is more effective than the conjugate gradient method and is suitable for large-scale unconstrained optimization problems.
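A sketch of the storage pattern such a method exploits: with a diagonal approximation, only O(n) memory and work per iteration are needed. The componentwise secant ratio below is illustrative only; the paper's actual correction formula is not reproduced here:

```python
import numpy as np

def diagonal_qn_direction(g, diag_B):
    """Search direction -B^{-1} g with a diagonal Hessian approximation:
    O(n) storage and work, which is the point of diagonal updates."""
    return -g / diag_B

def diagonal_secant_update(diag_B, s, y, lo=1e-8, hi=1e8):
    """Illustrative componentwise secant update diag_B[i] ~ y[i] / s[i],
    safeguarded to stay positive and bounded."""
    new = diag_B.copy()
    mask = np.abs(s) > 1e-12
    new[mask] = y[mask] / s[mask]    # componentwise secant ratio
    return np.clip(new, lo, hi)      # keep positive and bounded
```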

15.
Using a modified BFGS formula, this paper proposes a BFGS trust-region method that incorporates an Armijo line search technique, and proves the global convergence and superlinear convergence of the method under certain conditions. Preliminary numerical results show that the method is effective.

16.
For unconstrained programs with nonconvex objective functions, this article gives a modified BFGS algorithm. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and guarantee the effectiveness of the quasi-Newton iteration. We prove the global convergence of the algorithm in association with a general form of line search, and prove the quadratic convergence rate of the algorithm under some conditions.
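One well-known way to keep the BFGS update effective on nonconvex problems (a Li-Fukushima-type modification; the article's specific modification may differ) replaces y_k by

$$\hat{y}_k = y_k + r_k s_k, \qquad r_k = \max\left\{0,\ \frac{-s_k^T y_k}{\|s_k\|^2}\right\} + c\,\|g_k\|,$$

so that s_k^T \hat{y}_k \ge c\,\|g_k\|\,\|s_k\|^2 > 0 away from stationary points and the updated matrix stays positive definite.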

17.
A family of variable metric proximal methods
We consider conceptual optimization methods combining two ideas: the Moreau-Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions. We outline several approaches based on this combination, and establish their global convergence. Then we study theoretically the local convergence properties of one of these approaches, which uses quasi-Newton updates of the objective function itself. Also, we obtain a globally and superlinearly convergent BFGS proximal method. At each step of our study, we single out the assumptions that are useful to derive the result concerned.
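For reference, the Moreau-Yosida regularization of a convex function f with parameter λ > 0 is

$$F_\lambda(x) = \min_y \left\{ f(y) + \frac{1}{2\lambda}\|y - x\|^2 \right\},$$

which is convex and differentiable with \nabla F_\lambda(x) = (x - p_\lambda(x))/\lambda, where p_λ(x) is the unique minimizer, i.e., the proximal point of x. Quasi-Newton updates are then applied to F_λ or, in the variant analyzed locally here, to f itself.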

18.
We consider sequential quadratic programming methods for solving constrained nonlinear programming problems. It is generally believed that these methods are sensitive to the accuracy by which partial derivatives are provided. One reason is that differences of gradients of the Lagrangian function are used for updating a quasi-Newton matrix, e.g., by the BFGS formula. The purpose of this paper is to show by numerical experimentation that the method can be stabilized substantially. The algorithm applies non-monotone line search and internal and external restarts in case of errors due to inaccurate derivatives while computing the search direction. Even in case of large random errors leading to partial derivatives with at most one correct digit, termination subject to an accuracy of 10^-7 can be achieved in 90% of 306 problems of a standard test suite. On the other hand, the original version with monotone line search and without restarts solves only 30% of these problems under the same test environment. In addition, we show how initial and periodic scaled restarts improve the efficiency in situations with slow convergence.

19.
In this paper, the calibration of the nonlinear Lotka–Volterra model is used to compare the robustness and efficiency (CPU time) of different optimisation algorithms. Five versions of a quasi-Newton trust-region algorithm are developed and compared with a widely used quasi-Newton method. The trust-region algorithms are more robust, and three of them are numerically cheaper than the more usual line search approach. Computation of the first derivatives of the objective function is cheaper with the backward differentiation (or adjoint model) technique than with the forward method as soon as the number of parameters exceeds a few. In the optimisation problem, the additional information about the Jacobian matrix made available by the forward method reduces the number of iterations but does not compensate for the increased numerical cost. A quasi-Newton trust-region algorithm with backward differentiation and BFGS updates after both successful and unsuccessful iterations represents a robust and efficient algorithm that can be used to calibrate very demanding dynamic models.

20.
This paper gives a simpler proof of the global convergence of the Broyden class of quasi-Newton methods with inexact line search. The proof generalizes and modifies the proof of the global convergence of the BFGS method by Nocedal and Wright in [3].
