Similar Articles
20 similar articles found.
1.
A new diagonal quasi-Newton updating algorithm for unconstrained optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward finite-difference approximations of the directional derivatives of the components of the gradient. Under mild classical assumptions, the convergence of the algorithm is proved to be linear. Numerical experiments with 80 unconstrained optimization test problems of different structures and complexities, as well as five applications from the MINPACK-2 collection, show that the suggested algorithm is more efficient and more robust than each of the following: the quasi-Newton diagonal algorithm retaining only the diagonal elements of the BFGS update, the weak quasi-Newton diagonal algorithm, the quasi-Cauchy diagonal algorithm, the diagonal approximation of the Hessian by the least-change secant updating strategy minimizing the trace of the matrix, the Cauchy algorithm with Oren–Luenberger scaling in its complementary form (i.e. the Barzilai–Borwein algorithm), the steepest descent algorithm, and the classical BFGS algorithm. However, our algorithm is inferior to the limited memory BFGS algorithm (L-BFGS).
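The core idea is cheap to sketch. The snippet below is a minimal illustration, not Andrei's exact scaled update: it estimates the Hessian diagonal by forward differences of the gradient components and takes a quasi-Newton step with that diagonal model. The safeguard floor and the toy objective are assumptions for the demo.

    import numpy as np

    def diag_hessian_fd(grad, x, h=1e-6):
        # Forward-difference estimate of the Hessian diagonal:
        # d_i ~ (g_i(x + h*e_i) - g_i(x)) / h
        g0 = grad(x)
        d = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            d[i] = (grad(x + e)[i] - g0[i]) / h
        return np.maximum(d, 1e-8)   # safeguard keeps the diagonal model positive

    def diag_qn_step(grad, x, h=1e-6):
        # One quasi-Newton step with the diagonal Hessian model
        return x - grad(x) / diag_hessian_fd(grad, x, h)

    # toy usage: f(x) = sum(x^4), so grad f = 4 x^3
    grad = lambda x: 4.0 * x**3
    x = np.ones(3)
    for _ in range(60):
        x = diag_qn_step(grad, x)
    print(x)   # close to the minimizer 0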

2.
In this paper, an adaptive trust region algorithm that uses Moreau–Yosida regularization is proposed for solving nonsmooth unconstrained optimization problems. The proposed algorithm combines a modified secant equation with the BFGS update formula and an adaptive trust region radius, where the new trust region radius uses not only the function information but also the gradient information. The global convergence and the local superlinear convergence of the proposed algorithm are proven under suitable conditions. Finally, preliminary numerical comparisons with some existing algorithms reveal that the proposed algorithm is quite promising for solving nonsmooth unconstrained optimization problems.

3.
In this paper, we scale the quasi-Newton equation and propose a spectral scaling BFGS method. The method has a good self-correcting property and can improve the behavior of the BFGS method. Compared with the standard BFGS method, the single-step convergence rate of the spectral scaling BFGS method is not inferior to that of the steepest descent method when minimizing an n-dimensional quadratic function. In addition, when the method with exact line search is applied to minimize an n-dimensional strictly convex function, it terminates within n steps. Under appropriate conditions, we show that the spectral scaling BFGS method with Wolfe line search is globally and R-linearly convergent for uniformly convex optimization problems. The reported numerical results show that the spectral scaling BFGS method outperforms the standard BFGS method.
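As a rough sketch of the idea (the paper's exact scaling may differ), the inverse BFGS update can be applied to a spectrally scaled matrix tau*H, with tau = s^T y / y^T y, a Barzilai-Borwein-type factor:

    import numpy as np

    def spectral_scaled_bfgs(H, s, y):
        # Inverse BFGS update of tau*H with the pair (s, y), where
        # tau = s^T y / y^T y is one common spectral scaling choice.
        sy = s @ y
        if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            return H   # curvature condition fails: skip the update
        tau = sy / (y @ y)
        rho = 1.0 / sy
        V = np.eye(len(s)) - rho * np.outer(s, y)
        # H+ = V (tau*H) V^T + rho s s^T  (standard BFGS inverse form)
        return V @ (tau * H) @ V.T + rho * np.outer(s, s)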

4.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results for a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the classical conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française Informat. Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to CPU time it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
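For intuition, the search direction of a scaled memoryless BFGS scheme can be computed matrix-free. The sketch below assumes the spectral scaling theta = s^T s / s^T y and omits the Powell restart logic described above:

    import numpy as np

    def scaled_memoryless_bfgs_dir(g, s, y):
        # d = -H g, where H is the BFGS update of theta*I with the
        # pair (s, y) and theta = s^T s / s^T y.  Inner products only,
        # so no n-by-n matrix is ever stored.
        sy = s @ y
        if sy <= 0.0:
            return -g    # curvature condition fails: steepest descent
        theta = (s @ s) / sy
        gs, gy, yy = g @ s, g @ y, y @ y
        return (-theta * g
                + (theta * gy / sy - (1.0 + theta * yy / sy) * gs / sy) * s
                + (theta * gs / sy) * y)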

5.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known BFGS and DFP updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.

6.
In this work, we present a new hybrid conjugate gradient method based on a convex hybridization of the conjugate gradient update parameters of DY and HS+, adopting a quasi-Newton philosophy. The hybridization parameter is computed by minimizing the distance between the hybrid conjugate gradient direction and the self-scaling memoryless BFGS direction. Furthermore, a significant property of our proposed method is that it ensures sufficient descent independently of the accuracy of the line search. The global convergence of the proposed method is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection indicate that our proposed method is preferable and in general superior to classic conjugate gradient methods in terms of efficiency and robustness.
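A hedged sketch of the hybridization itself follows, with the hybridization parameter theta left as a free input rather than derived from the memoryless BFGS matching described above:

    import numpy as np

    def hybrid_dy_hsplus_beta(g_new, d, y, theta):
        # Convex combination of the Dai-Yuan and truncated
        # Hestenes-Stiefel parameters:
        #   beta = (1 - theta)*beta_DY + theta*beta_HS+,  theta in [0, 1].
        dy = d @ y
        beta_dy = (g_new @ g_new) / dy
        beta_hs_plus = max((g_new @ y) / dy, 0.0)   # HS+ truncation
        return (1.0 - theta) * beta_dy + theta * beta_hs_plus

    # next search direction: d_new = -g_new + beta * d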

7.
In this paper, a switching method for unconstrained minimization is proposed. The method is based on the modified BFGS method and the modified SR1 method. The eigenvalues and condition numbers of both modified updates are evaluated and used in the switching rule. When the condition number of the modified SR1 update is better than that of the modified BFGS update, the step in the proposed quasi-Newton method is the modified SR1 step; otherwise it is the modified BFGS step. The efficiency of the proposed method is tested by numerical experiments on small-, medium- and large-scale problems. The numerical results are reported and analyzed to show the superiority of the proposed method.
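A simplified version of such a switching rule might look as follows. The explicit eigenvalue formulas the paper evaluates are replaced here by a direct np.linalg.cond call, and the plain (rather than modified) BFGS and SR1 updates are used:

    import numpy as np

    def switched_update(B, s, y):
        # Assumes the curvature condition s^T y > 0 holds for the BFGS part.
        Bs = B @ s
        sy, sBs = s @ y, s @ Bs
        B_bfgs = B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
        r = y - Bs                   # SR1 correction vector
        rs = r @ s
        if abs(rs) < 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
            return B_bfgs            # SR1 denominator too small: keep BFGS
        B_sr1 = B + np.outer(r, r) / rs
        # keep whichever candidate update is better conditioned
        return B_sr1 if np.linalg.cond(B_sr1) < np.linalg.cond(B_bfgs) else B_bfgs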

8.
This paper extends a certain damped technique, suitable for the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, to the limited memory BFGS method for large-scale unconstrained optimization. It is shown that the proposed technique maintains the global convergence property of the limited memory BFGS method on uniformly convex functions. Some numerical results are described to illustrate the important role of the damped technique. Since this technique safely enforces the positive definiteness of the BFGS update for any value of the steplength, we also consider only the first Wolfe–Powell condition on the steplength. Then, as in the backtracking framework, only one gradient evaluation is performed per iteration. It is reported that the proposed damped methods work much better than the limited memory BFGS method in several cases.
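The damping ingredient is Powell's classical modification, stated below for a general Hessian approximation B as a sketch; with eta = 0.2 it guarantees s^T y_damped = eta * s^T B s > 0 whenever damping is triggered, so the BFGS or L-BFGS update stays positive definite for any steplength:

    import numpy as np

    def powell_damped_y(B, s, y, eta=0.2):
        # If s^T y < eta * s^T B s, replace y by phi*y + (1 - phi)*B s
        # with phi = (1 - eta) * s^T B s / (s^T B s - s^T y).
        Bs = B @ s
        sBs, sy = s @ Bs, s @ y
        if sy >= eta * sBs:
            return y                 # enough curvature: no damping needed
        phi = (1.0 - eta) * sBs / (sBs - sy)
        return phi * y + (1.0 - phi) * Bs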

9.
It is well known that trust region methods are very effective for optimization problems. In this article, a new adaptive trust region method is presented for solving unconstrained optimization problems. The proposed method combines a modified secant equation with the BFGS update formula and an adaptive trust region radius, where the new trust region radius makes use of not only the function information but also the gradient information. Under suitable conditions, global convergence is proved, and we demonstrate the local superlinear convergence of the proposed method. The numerical results indicate that the proposed method is very efficient.
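A generic ratio-based radius update is sketched below; the coupling to ||g|| in the growth branch is only a guess at how gradient information could enter, not the paper's actual formula:

    def update_radius(delta, rho, step_norm, g_norm):
        # rho = (actual reduction) / (predicted reduction) of the model.
        if rho < 0.25:
            return 0.25 * step_norm                  # poor fit: shrink
        if rho > 0.75 and step_norm >= 0.99 * delta:
            # good fit on the boundary: grow; capping by a multiple of
            # ||g|| is an assumed stand-in for the adaptive rule.
            return min(2.0 * delta, 10.0 * g_norm)
        return delta                                 # otherwise keep radius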

10.
We study the use of the BFGS and DFP algorithms with step-lengths of one for minimizing quadratic functions of only two variables. The updating formulae in this case imply nonlinear three-term recurrence relations between the eigenvalues of consecutive second derivative approximations, which are analysed in order to explain some gross inefficiencies that can occur. Specifically, the BFGS algorithm may require more than 10 iterations to achieve the first decimal place of accuracy, while the performance of the DFP method is far worse. The results help to explain why the DFP method is often less suitable than the BFGS algorithm for general unconstrained optimization calculations, and they show that quadratic functions provide much information about efficiency when the current vector of variables is too far from the solution for an asymptotic convergence analysis.
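The setting is easy to reproduce numerically. The toy experiment below (an assumed ill-conditioned 2x2 diagonal quadratic, not the paper's test data) runs BFGS with steplength one and prints the eigenvalues of the successive Hessian approximations, whose evolution is what the three-term recurrence describes:

    import numpy as np

    A = np.diag([1.0, 100.0])                 # 2x2 quadratic, condition 100
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x

    x, B = np.array([1.0, 1.0]), np.eye(2)
    for k in range(15):
        s = -np.linalg.solve(B, grad(x))      # quasi-Newton step, steplength 1
        y = grad(x + s) - grad(x)             # equals A s for a quadratic
        x = x + s
        Bs = B @ s
        B += np.outer(y, y) / (s @ y) - np.outer(Bs, Bs) / (s @ Bs)
        print(k, f(x), np.linalg.eigvalsh(B)) # watch the eigenvalue recurrence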

11.
Based on simple quadratic models of the trust region subproblem, we combine the trust region method with nonmonotone and adaptive techniques to propose a new nonmonotone adaptive trust region algorithm for unconstrained optimization. Unlike traditional trust region methods, our trust region subproblem is very simple because it uses a new scaled approximation of the objective function's Hessian. The new method needs less memory and has lower computational complexity. The convergence results of the method are proved under certain conditions. Numerical results show that the new method is effective and attractive for large-scale unconstrained problems.

12.
A class of two-parameter scaled memoryless BFGS methods is developed for solving unconstrained optimization problems. Then, the scaling parameters are determined in a...

13.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
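One standard nonmonotone form, the Grippo-Lampariello-Lucidi test, accepts a step when the new value lies below the maximum of the last M function values plus an Armijo term. A sketch with assumed default constants:

    def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, tau=0.5, max_back=30):
        # Accept alpha when
        #   f(x + alpha*d) <= max(last M f-values) + sigma*alpha*g^T d.
        # f_hist is a list holding the M most recent function values.
        fmax = max(f_hist)
        gd = g @ d                  # assumed negative (descent direction)
        alpha = 1.0
        for _ in range(max_back):
            if f(x + alpha * d) <= fmax + sigma * alpha * gd:
                return alpha
            alpha *= tau            # backtrack
        return alpha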

14.
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei's approach. Since the proposed method is designed based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is that it uses the available function values in addition to the gradient values. It is shown that, for uniformly convex objective functions, the search directions of the method fulfill the sufficient descent condition, which leads to global convergence. Numerical comparisons of the method with an efficient scaled conjugate gradient method proposed by Andrei, made on a set of unconstrained optimization test problems of the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.
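The modified secant ingredient can be written compactly; the sketch below assumes the common choice u_k = s_k in Zhang et al.'s formula, and is recalled from memory rather than from the revised form the paper actually uses:

    def modified_secant_y(f_old, f_new, g_old, g_new, s):
        # Modified secant vector of Zhang et al. (with u_k = s_k):
        #   y~ = y + (theta / s^T s) s,
        #   theta = 6 (f_k - f_{k+1}) + 3 (g_k + g_{k+1})^T s,
        # so function values enter the secant condition B_{k+1} s = y~.
        y = g_new - g_old
        theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new) @ s
        return y + (theta / (s @ s)) * s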

15.
The application of quasi-Newton methods is widespread in numerical optimization. Independently of the application, the techniques used to update the BFGS matrices seem to play an important role in the performance of the overall method. In this paper, we address precisely this issue. We compare two implementations of the limited memory BFGS method for large-scale unconstrained problems. They differ in the updating technique and the choice of initial matrix. L-BFGS performs continuous updating, whereas SNOPT uses a restarted limited memory strategy. Our study shows that continuous updating techniques are more effective, particularly for large problems.
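The continuous-updating variant referred to here is the standard L-BFGS two-loop recursion; a sketch with the usual s^T y / y^T y scaling of the initial matrix:

    import numpy as np

    def two_loop(g, pairs):
        # Computes q = H g from the m most recent (s, y) pairs,
        # oldest first in `pairs`; the search direction is -q.
        q = g.copy()
        alphas = []
        for s, y in reversed(pairs):          # newest pair first
            a = (s @ q) / (s @ y)
            alphas.append(a)
            q -= a * y
        if pairs:
            s, y = pairs[-1]
            q *= (s @ y) / (y @ y)            # initial scaling gamma_k * I
        for (s, y), a in zip(pairs, reversed(alphas)):
            b = (y @ q) / (s @ y)
            q += (a - b) * s
        return q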

16.
Jing Shujie, Yu Junxia, Journal of Mathematics, 2015, 35(1): 131-134
This paper presents a new BFGS trust region algorithm for unconstrained optimization problems. Combining the BFGS method with the trust region method yields an improved BFGS trust region method; it generalizes the two algorithms of references [3, 5] into a new BFGS trust region algorithm, whose global convergence is proved under suitable conditions.

17.
Gu Guiding, Wang Deren, Mathematica Numerica Sinica, 1999, 21(4): 417-428
1. Introduction. In practical applications one often needs to solve a family of minimization problems, namely min f_i(x), i = 1, ..., p, (1.1) where the f_i: R^n -> R share a common Hessian matrix G(x) = ∇²f_i(x) and p is a moderate number. For instance, problem (1.1) arises in the study of elastic bodies under various loads, where f_i(x) = f(x) + q_i^T x + c_i, i = 1, ..., p; for different i, f_i(x) has a different minimizer and a different gradient ∇f_i(x), but the same Hessian matrix G(x). In 1994, O'Leary et al. extended quasi-Newton algorithms to a multiple version...

18.
For convex functions whose Hessian at the minimizer is rank-deficient by one, this paper gives a class of modified BFGS algorithms for unconstrained optimization. The idea is to add a correction term to the convex function to obtain an equivalent model, and then to simplify this model to derive a modified BFGS algorithm. The paper proves that the algorithm converges superlinearly, and numerical comparisons of the modified BFGS algorithm with the tensor method show that it is more effective for solving rank-one deficient unconstrained optimization problems.

19.
This paper concerns the memoryless quasi-Newton method, that is, the quasi-Newton method for which the approximation to the inverse Hessian is, at each step, updated from the identity matrix, so that its search direction can be computed without storing matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is developed. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. We therefore propose a memoryless SR1 method that is updated from a positively scaled identity, where the scaling factor is derived in such a way that positive definiteness of the updated matrices is preserved while the conditioning of the scaled memoryless SR1 update is improved. Under very mild conditions it is shown that, for strictly convex objective functions, the method is globally convergent with a linear rate of convergence. Numerical results show that the optimally scaled memoryless SR1 method is very encouraging.
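For concreteness, the SR1 update of a scaled identity gamma*I admits a closed-form, matrix-free direction. In the sketch below gamma is a plain input, whereas the paper derives it so that positive definiteness and good conditioning are guaranteed:

    import numpy as np

    def memoryless_sr1_dir(g, s, y, gamma):
        # d = -H g with H = gamma*I + v v^T / (v^T y), v = s - gamma*y,
        # i.e. the SR1 update of the scaled identity (H y = s holds).
        v = s - gamma * y
        vy = v @ y
        if abs(vy) <= 1e-8 * np.linalg.norm(v) * np.linalg.norm(y):
            return -gamma * g        # standard SR1 skip rule
        return -(gamma * g + ((v @ g) / vy) * v)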

20.
By minimizing the distance between the search direction matrix of the Dai–Liao method and the scaled memoryless BFGS update in the Frobenius norm, and using Powell's nonnegativity restriction on the conjugate gradient parameters, a one-parameter class of nonlinear conjugate gradient methods is proposed. Then, a brief global convergence analysis is given with and without a convexity assumption on the objective function. Preliminary numerical results are reported; they demonstrate that a proper choice of the parameter of the proposed class of conjugate gradient methods may lead to promising numerical performance.
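The Dai–Liao parameter with Powell's nonnegativity restriction is a one-liner; a sketch, where t > 0 is the free Dai–Liao parameter and the paper's Frobenius-norm matching for selecting the class parameter is not performed:

    def dai_liao_beta_plus(g_new, d, s, y, t):
        # beta = g_{k+1}^T (y_k - t s_k) / (d_k^T y_k), truncated at zero
        # (Powell's nonnegativity restriction).
        return max((g_new @ (y - t * s)) / (d @ y), 0.0)

    # next search direction: d_new = -g_new + beta * d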
