Similar Literature
1.
Liu Jinkui, Chinese Quarterly Journal of Mathematics, 2014(1): 142-150
Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under a Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments that show the efficiency of the proposed method in comparison with the famous PRP+ method.
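For reference, the Liu–Storey (LS) update that this entry builds on uses the parameter below, in standard notation with $g_k$ the gradient and $d_{k-1}$ the previous search direction (a standard formula from the CG literature, not quoted from the paper above):

$$\beta_k^{LS} = \frac{g_k^\top (g_k - g_{k-1})}{-d_{k-1}^\top g_{k-1}}, \qquad d_k = -g_k + \beta_k^{LS} d_{k-1}, \quad d_0 = -g_0.$$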

2.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence guarantee under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey (LS) conjugate gradient method. The new nonmonotone line search guarantees the global convergence of the LS method and performs well numerically. By estimating the Lipschitz constant of the derivative of the objective function in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
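As a rough illustration of the nonmonotone idea, the sketch below implements a classical Grippo–Lampariello–Lucidi-style nonmonotone Armijo backtracking in Python. The paper's actual rule differs (it additionally estimates a Lipschitz constant to pick the step size), and all names and parameter values here are illustrative assumptions.

```python
def nonmonotone_armijo(f, grad, x, d, f_history, delta=1e-4, rho=0.5,
                       alpha0=1.0, max_backtracks=60):
    """Nonmonotone Armijo backtracking (GLL-style sketch).

    A step is accepted once f(x + alpha*d) lies below the *maximum* of the
    last few function values (f_history), not below f(x) itself, which
    allows occasional increases while still forcing overall descent.
    """
    g_dot_d = float(grad(x) @ d)
    if g_dot_d >= 0:
        raise ValueError("d must be a descent direction")
    f_ref = max(f_history)          # nonmonotone reference value
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + delta * alpha * g_dot_d:
            return alpha
        alpha *= rho                # backtrack
    return alpha
```

In practice f_history would hold the last M function values for a small window size M, updated after every accepted step.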

3.
In this paper, using a projection of the PRP (Polak–Ribière–Polyak) conjugate gradient direction, we develop a PRP-based descent method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, if an exact line search is used, the method reduces to the standard PRP method. Under suitable conditions, we show that the method with a backtracking line search or a generalized Wolfe-type line search is globally convergent. We also report numerical results and compare the performance of the method with some existing conjugate gradient methods. The results show that the proposed method is efficient.
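The underlying PRP parameter is, in standard notation (the classical formula, not the paper's projected variant):

$$\beta_k^{PRP} = \frac{g_k^\top (g_k - g_{k-1})}{\|g_{k-1}\|^2}.$$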

4.
An efficient sufficient-descent conjugate gradient method
For large-scale unconstrained optimization problems, this paper proposes a sufficient-descent conjugate gradient formula and builds the corresponding algorithm. Without relying on any line search condition, the algorithm produces a sufficient descent direction at every iteration. If the step size is computed with the standard Wolfe inexact line search, good global convergence of the algorithm is obtained under conventional assumptions. Finally, large-scale numerical experiments are carried out and the results are profiled with the performance plots of Dolan and Moré; they show that the algorithm is effective.
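For context, the sufficient descent condition and the standard Wolfe conditions that recur throughout these abstracts are, in standard notation (facts from the optimization literature rather than from this particular paper):

$$g_k^\top d_k \le -c\,\|g_k\|^2 \quad (c > 0 \text{ a constant}),$$

$$f(x_k + \alpha_k d_k) \le f(x_k) + \rho\,\alpha_k\, g_k^\top d_k, \qquad g(x_k + \alpha_k d_k)^\top d_k \ge \sigma\, g_k^\top d_k, \qquad 0 < \rho < \sigma < 1;$$

the strong Wolfe variant replaces the second inequality by $|g(x_k + \alpha_k d_k)^\top d_k| \le \sigma\,|g_k^\top d_k|$.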

5.
The conjugate gradient method is one of the most effective methods for solving large-scale unconstrained optimization problems. This paper improves the parameter formula of the HS conjugate gradient method to obtain a new formula and builds an algorithmic framework upon it. Without relying on any line search condition, the iterative directions generated by the framework are proved to satisfy the sufficient descent condition, and global convergence of the algorithm is proved under the standard Wolfe line search conditions. Finally, numerical tests of the new algorithm show that the improved method is effective.
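The classical Hestenes–Stiefel parameter being modified is (standard notation; the paper's new formula is not reproduced here):

$$\beta_k^{HS} = \frac{g_k^\top y_{k-1}}{d_{k-1}^\top y_{k-1}}, \qquad y_{k-1} = g_k - g_{k-1}.$$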

6.
An efficient class of descent methods for unconstrained optimization problems is the line search methods, in which a step size must be chosen at each iteration once a descent direction has been determined. There are many ways to choose the step size, such as the exact line search and the Armijo, Goldstein, and Wolfe line searches. In this paper we propose a new inexact line search for a general descent method and establish some global convergence properties. The new line search has many advantages compared with other similar inexact line searches. Moreover, we analyze the global convergence and local convergence rate of some special descent methods with the new line search. Preliminary numerical results show that the new line search is effective and efficient in practical computation.
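For comparison, the classical (monotone) Armijo rule mentioned above looks as follows in Python; the paper's new line search is different, and these parameter values are common defaults rather than the paper's choices.

```python
def armijo_backtracking(f, x, d, fx, g_dot_d, delta=1e-4, rho=0.5,
                        alpha0=1.0, max_backtracks=60):
    """Classical Armijo rule: accept the first alpha in the geometric
    sequence alpha0, rho*alpha0, rho^2*alpha0, ... satisfying
    f(x + alpha*d) <= f(x) + delta * alpha * g^T d."""
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + delta * alpha * g_dot_d:
            return alpha
        alpha *= rho
    return alpha
```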

7.
Global convergence of a class of nonmonotone modified PRP algorithms
Yi Fang, Mathematics in Economics, 2006, 23(1): 99-103
This paper gives a class of modified PRP algorithms under a nonmonotone line search; the method guarantees that the search direction at every iteration satisfies the sufficient descent property. Under fairly weak conditions, we prove that this class of nonmonotone modified PRP algorithms is globally convergent.

8.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example.
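The following Python sketch shows the shape of one N-GMRES iteration with the predefined-small-step steepest descent preconditioner, based on my reading of the abstract; the published algorithm includes a line search and additional safeguards, and every name and default value here is an assumption.

```python
import numpy as np

def ngmres_step(f, grad, xs, gs, beta=1e-3):
    """One N-GMRES acceleration step (sketch).

    xs, gs : lists of recent iterates and their gradients (the window).
    beta   : predefined small step for the steepest descent preconditioner.
    """
    # Step 1: steepest descent preconditioner gives a preliminary iterate.
    x_bar = xs[-1] - beta * gs[-1]
    g_bar = grad(x_bar)
    # Step 2: GMRES-like acceleration -- choose coefficients alpha to
    # minimize the linearized gradient norm
    # || g_bar + sum_i alpha_i (g_bar - g_i) || by least squares.
    A = np.column_stack([g_bar - gi for gi in gs])
    alpha, *_ = np.linalg.lstsq(A, -g_bar, rcond=None)
    x_acc = x_bar + np.column_stack([x_bar - xi for xi in xs]) @ alpha
    # Step 3: crude safeguard -- keep the accelerated point only if it helps.
    return x_acc if f(x_acc) < f(x_bar) else x_bar
```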

9.
Although the study of global convergence of the Polak–Ribière–Polyak (PRP), Hestenes–Stiefel (HS) and Liu–Storey (LS) conjugate gradient methods has made great progress, the convergence of these algorithms for general nonlinear functions is still erratic, not to mention under weak conditions on the objective function and weak line search rules. Besides, it is also interesting to investigate whether there exists a general method that converges under the standard Armijo line search for general nonconvex functions, since very few relevant results have been achieved. So in this paper, we present a new general form of conjugate gradient methods whose theoretical significance is attractive. With any formula β_k ≥ 0 and under weak conditions, the proposed method satisfies the sufficient descent condition independently of the line search used and the function convexity, and its global convergence can be achieved under the standard Wolfe line search or even under the standard Armijo line search. Based on this new method, convergence results on the PRP, HS, LS, Dai–Yuan-type (DY) and Conjugate–Descent-type (CD) methods are established. Preliminary numerical results show the efficiency of the proposed methods.
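One well-known way to obtain sufficient descent for any $\beta_k \ge 0$, in the spirit of (though not necessarily identical to) the general form proposed above, is to project the previous direction away from the current gradient:

$$d_k = -g_k + \beta_k \Bigl( I - \frac{g_k g_k^\top}{\|g_k\|^2} \Bigr) d_{k-1}, \qquad \text{so that } g_k^\top d_k = -\|g_k\|^2 \text{ for every } \beta_k.$$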

10.
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) The sufficient descent property is satisfied without any line search; (ii) The search direction will be in a trust region automatically; (iii) The Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) This method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. The global convergence and the linearly convergent rate of the given method are established. Numerical results show that this method is interesting.
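The Zoutendijk condition referred to in (iii) is the standard summability condition from CG convergence analysis:

$$\sum_{k \ge 0} \frac{(g_k^\top d_k)^2}{\|d_k\|^2} < \infty.$$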

11.
Global convergence of the conjugate descent method
This paper proposes an Armijo-type line search and discusses the global convergence of the conjugate descent method under it; moreover, the method produces a descent search direction at every iteration.
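Fletcher's conjugate descent (CD) parameter discussed here is, in standard notation:

$$\beta_k^{CD} = \frac{\|g_k\|^2}{-d_{k-1}^\top g_{k-1}}.$$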

12.
The strong Wolfe conditions cannot guarantee the global convergence of the standard CD conjugate gradient method. By constructing a new conjugate parameter, this paper proposes a new spectral conjugate gradient method for unconstrained optimization problems; the method is equivalent to the standard CD conjugate gradient method under an exact line search, and possesses the descent property and global convergence under the standard Wolfe line search. Preliminary numerical results show that the new method is effective and well suited to solving nonlinear unconstrained optimization problems.
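Generically, a spectral conjugate gradient direction scales the gradient term by a spectral parameter $\theta_k$ (a generic template from the literature; the paper's specific choices of $\theta_k$ and of the new conjugate parameter are not reproduced here):

$$d_k = -\theta_k g_k + \beta_k d_{k-1}, \qquad d_0 = -g_0.$$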

13.
A new conjugate gradient method is proposed in this paper. For any (inexact) line search, our scheme satisfies the sufficient descent property. The method is proved to be globally convergent if the restricted Wolfe-Powell line search is used. Preliminary numerical results show that it is efficient.

14.
This paper gives a feasible descent method for a class of linearly constrained nondifferentiable optimization problems whose objective function is the composition of a convex function and a differentiable function. The algorithm finds a feasible descent direction by solving a sequence of quadratic programs, and the new iterate is generated by an inexact line search. Under fairly weak conditions, we prove the global convergence of the algorithm.

15.
Global convergence of a modified spectral CD conjugate gradient algorithm
In this paper, we present a new nonlinear modified spectral CD conjugate gradient method for solving large-scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function, and this property depends neither on the line search rule nor on the convexity of the objective function. Moreover, the modified method reduces to the standard CD method if the line search is exact. Under some mild conditions, we prove that the modified method with a line search is globally convergent even if the objective function is nonconvex. Preliminary numerical results show that the proposed method is very promising.

16.
Based on the well-known PRP conjugate gradient method, and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for solving large-scale unconstrained optimization problems. At every iteration the method produces a sufficiently descent search direction, independently of any line search condition. Under the standard Wolfe line search conditions, the global convergence and linear convergence rate of the modified PRP method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.
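For reference, the CG_DESCENT (Hager–Zhang) parameter whose structure is borrowed here is, in standard notation:

$$\beta_k^{HZ} = \frac{1}{d_{k-1}^\top y_{k-1}} \left( y_{k-1} - 2\, d_{k-1} \frac{\|y_{k-1}\|^2}{d_{k-1}^\top y_{k-1}} \right)^{\!\top} g_k, \qquad y_{k-1} = g_k - g_{k-1}.$$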

17.
This paper considers sufficient descent Riemannian conjugate gradient methods with line search algorithms. We propose two kinds of sufficient descent nonlinear conjugate gradient methods and prove that they satisfy the sufficient descent condition on Riemannian manifolds. One is a hybrid method combining a Fletcher–Reeves-type method with a Polak–Ribière–Polyak-type method, and the other is a Hager–Zhang-type method; both are generalizations of those used in Euclidean space. Moreover, we prove that the hybrid method has a global convergence property under the strong Wolfe conditions and that the Hager–Zhang-type method has the sufficient descent property regardless of whether a line search is used. Further, we review two kinds of line search algorithms on Riemannian manifolds and numerically compare our generalized methods by solving several Riemannian optimization problems. The results show that the performance of the proposed hybrid methods depends greatly on the type of line search used, whereas the Hager–Zhang-type method converges fast regardless of the type of line search used.
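In the Euclidean setting, one common hybrid of this kind truncates the PRP parameter by the FR parameter (a classical variant; the paper's Riemannian generalization replaces Euclidean inner products with the Riemannian metric and transports $d_{k-1}$ between tangent spaces):

$$\beta_k^{hyb} = \max\bigl(0, \min(\beta_k^{PRP}, \beta_k^{FR})\bigr), \qquad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}.$$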

18.
Global convergence of the conjugate descent method
Yuan Yaxiang, Advances in Mathematics (China), 1996, 25(6): 552-562
The conjugate descent method was first proposed by Fletcher. This paper proves that a class of inexact line search conditions guarantees the convergence of the conjugate descent method, and constructs counterexamples showing that if the line search conditions are relaxed, the conjugate descent method may fail to converge. In addition, we obtain results for a class of methods related to the Fletcher-Reeves method.

19.
A three-term supermemory gradient method is proposed. The method's greatest advantage is that, without any line search, the iterative direction is already a sufficient descent direction. Global convergence of the method is analyzed under fairly weak conditions, and preliminary numerical experiments show that the method is effective.
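Generically, a supermemory gradient direction augments the steepest descent direction with several previous directions (a generic template from the literature; the paper's specific three-term coefficients are not reproduced here):

$$d_k = -g_k + \sum_{i=1}^{m} \beta_k^{(i)} d_{k-i},$$

with $m = 2$ giving a three-term recursion.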
