Similar Literature
1.
Jiang Xianzhen, Liao Wei, Yin Jianghua, Jian Jinbao. Numerical Algorithms, 2022, 91(1): 161-191

In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods is proposed for unconstrained optimization. The conjugate parameter in the search direction is a hybrid of the Dai-Yuan conjugate parameter and an arbitrary one. The search direction is then the sum of the negative gradient and a convex combination of the previous search direction and the gradient at the previous iteration. Without fixing any specific conjugate parameter, we show that the search direction generated by the family always possesses the descent property independently of the line search technique, and that the family is globally convergent under the usual assumptions and the weak Wolfe line search. To verify the effectiveness of the proposed family, we further design a specific conjugate parameter and perform medium- and large-scale numerical experiments on smooth unconstrained optimization and image restoration problems. The numerical results show encouraging efficiency and applicability of the proposed methods, even when compared with state-of-the-art methods.

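For orientation, the Dai-Yuan parameter that this family hybridizes is standard; the displayed three-term direction is only a schematic reading of "negative gradient plus a convex combination of the previous direction and the previous gradient" (the weight \theta_k \in [0,1] and the placement of the hybrid conjugate parameter are illustrative assumptions, not the paper's definitions):

    \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{\top}(g_k - g_{k-1})}, \qquad
    d_k = -g_k + \theta_k\, d_{k-1} - (1-\theta_k)\, g_{k-1}, \quad d_0 = -g_0.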

2.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method formed as a convex combination of two conjugate gradient methods satisfies the well-known Dai-Liao (D-L) conjugacy condition and, at the same time, coincides with the Newton direction under a suitable condition; moreover, this property does not depend on any line search. We further prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used, and the global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
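
For concreteness, with y_{k-1} = g_k - g_{k-1} and s_{k-1} = x_k - x_{k-1}, the two classical parameters being combined, a generic convex combination (the weight \theta_k here is hypothetical notation, not the paper's), and the Dai-Liao conjugacy condition mentioned above read:

    \beta_k^{LS} = \frac{g_k^{\top} y_{k-1}}{-g_{k-1}^{\top} d_{k-1}}, \quad
    \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad
    \beta_k = (1-\theta_k)\,\beta_k^{LS} + \theta_k\,\beta_k^{FR}, \quad
    d_k^{\top} y_{k-1} = -t\, g_k^{\top} s_{k-1} \ (t > 0).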

3.

This paper considers sufficient descent Riemannian conjugate gradient methods with line search algorithms. We propose two kinds of sufficient descent nonlinear conjugate gradient methods and prove that they satisfy the sufficient descent condition on Riemannian manifolds. One is a hybrid method combining a Fletcher–Reeves-type method with a Polak–Ribière–Polyak-type method, and the other is a Hager–Zhang-type method; both are generalizations of methods used in Euclidean space. Moreover, we prove that the hybrid method has a global convergence property under the strong Wolfe conditions and that the Hager–Zhang-type method has the sufficient descent property regardless of whether a line search is used. Further, we review two kinds of line search algorithms on Riemannian manifolds and numerically compare our generalized methods by solving several Riemannian optimization problems. The results show that the performance of the proposed hybrid method depends greatly on the type of line search used, while the Hager–Zhang-type method converges quickly regardless of the line search used.

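For reference, the Euclidean Hager-Zhang parameter that such a method generalizes is given below; the Riemannian version replaces the gradient difference and the previous direction with their vector-transported counterparts:

    \beta_k^{HZ} = \frac{1}{d_{k-1}^{\top} y_{k-1}} \left( y_{k-1} - 2\, d_{k-1} \frac{\|y_{k-1}\|^2}{d_{k-1}^{\top} y_{k-1}} \right)^{\!\top} g_k, \qquad y_{k-1} = g_k - g_{k-1}.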

4.
Based on the well-known PRP conjugate gradient method and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for solving large-scale unconstrained optimization problems. At every iteration the method generates a sufficient descent search direction, independently of any line search condition. Under the standard Wolfe line search, the global convergence and linear convergence rate of the modified PRP method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.
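
As a baseline for comparison, here is a minimal Python/NumPy sketch of a plain PRP loop with the common PRP+ truncation and a backtracking Armijo search; it is not the paper's modified method (the restart rule, the line search, and all tolerances are generic stand-ins for illustration):

    # Minimal PRP+ conjugate gradient sketch; NOT the paper's modified method.
    import numpy as np

    def prp_cg(f, grad, x, tol=1e-6, max_iter=1000):
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            if g @ d >= 0:                    # safeguard: restart with steepest descent
                d = -g
            alpha, c1 = 1.0, 1e-4
            while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
                alpha *= 0.5                  # backtracking Armijo (the paper uses Wolfe)
            x_new = x + alpha * d
            g_new = grad(x_new)
            beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+ keeps beta >= 0
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Example: minimize the Rosenbrock function.
    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    rosen_grad = lambda x: np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])
    print(prp_cg(rosen, rosen_grad, np.array([-1.2, 1.0])))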

5.
Based on two modified secant equations proposed by Yuan, and by Li and Fukushima, we extend the approach proposed by Andrei and introduce two hybrid conjugate gradient methods for unconstrained optimization problems. Our methods are hybridizations of the Hestenes-Stiefel and Dai-Yuan conjugate gradient methods. Under proper conditions, we show that one of the proposed algorithms is globally convergent for uniformly convex functions and the other is globally convergent for general functions. To enhance the performance of the line search procedure, we propose a new approach for computing the initial steplength used to start the line search. We compare implementations of our algorithms with two efficient representative hybrid conjugate gradient methods proposed by Andrei, using unconstrained optimization test problems from the CUTEr collection. Numerical results show that, in the sense of the performance profile introduced by Dolan and Moré, the proposed hybrid algorithms are competitive and in some cases more efficient.
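
For reference, the two parameters being hybridized are the standard Hestenes-Stiefel and Dai-Yuan formulas; a convex combination with weight \theta_k gives the generic hybrid form (in the paper the weights are derived from the modified secant equations, which are not reproduced here):

    \beta_k^{HS} = \frac{g_k^{\top} y_{k-1}}{d_{k-1}^{\top} y_{k-1}}, \qquad
    \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{\top} y_{k-1}}, \qquad
    \beta_k = (1-\theta_k)\,\beta_k^{HS} + \theta_k\,\beta_k^{DY}.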

6.
Building on the new class of conjugate gradient methods proposed in [1], this paper presents two new classes of nonlinear descent conjugate gradient methods for solving unconstrained optimization problems. Without any line search, both classes are guaranteed to generate a descent direction at every iteration. For general nonconvex functions, we prove the global convergence of the two new methods under the Wolfe line search conditions.

7.
In this paper, based on a new class of conjugate gradient methods proposed by Rivaie, Dai, Omer et al., we propose a class of improved conjugate gradient methods for nonconvex unconstrained optimization. Unlike the above methods, our methods possess the following properties: (i) the search direction always satisfies the sufficient descent condition independently of any line search; (ii) the approaches are globally convergent under the standard Wolfe line search or the standard Armijo line search without any convexity assumption. Our numerical results also demonstrate the efficiency of the proposed methods.
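
The two conditions referenced in (i) and (ii) are standard; with constants c > 0 and 0 < \delta < \sigma < 1, they read:

    g_k^{\top} d_k \le -c\,\|g_k\|^2 \quad \text{(sufficient descent)},

    f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k\, g_k^{\top} d_k, \qquad
    g(x_k + \alpha_k d_k)^{\top} d_k \ge \sigma\, g_k^{\top} d_k \quad \text{(standard Wolfe)}.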

8.
In this paper, using a projection of the PRP (Polak–Ribière–Polyak) conjugate gradient direction, we develop a PRP-based descent method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, if an exact line search is used, the method reduces to the standard PRP method. Under suitable conditions, we show that the method with a backtracking line search or a generalized Wolfe-type line search is globally convergent. We also report numerical results and compare the performance of the method with some existing conjugate gradient methods. The results show that the proposed method is efficient.
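
For reference, the underlying PRP parameter and direction are given below; note that an exact line search gives g_k^{\top} d_{k-1} = 0, which is why a descent-enforcing modification can vanish and the method can reduce to the standard PRP method:

    \beta_k^{PRP} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad
    d_k = -g_k + \beta_k^{PRP}\, d_{k-1}.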

9.
In this paper, we propose two new hybrid nonlinear conjugate gradient methods, which produce a sufficient descent search direction at every iteration. This property depends neither on the line search used nor on the convexity of the objective function. Under suitable conditions, we prove that the proposed methods converge globally for general nonconvex functions. The numerical results show that both hybrid methods are efficient on the given test problems from the CUTE library.

10.
Building on several common Armijo-type line searches, this paper proposes a new Armijo-type line search condition and, under it, proves the global convergence of the hybrid conjugate gradient method proposed by Du et al. Numerical experiments show that the new method is effective on the given test functions.

11.
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. It is well known that the direction generated by a conjugate gradient method may not be a descent direction of the objective function. In this paper, we make a slight modification to the Fletcher–Reeves (FR) method such that the direction generated by the modified method is a descent direction for the objective function. This property depends neither on the line search used nor on the convexity of the objective function. Moreover, the modified method reduces to the standard FR method if the line search is exact. Under mild conditions, we prove that the modified method with an Armijo-type line search is globally convergent even if the objective function is nonconvex. We also present numerical results to show the efficiency of the proposed method. Supported by the 973 project (2004CB719402) and the NSF foundation (10471036) of China.
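
One standard way to force descent on an FR-type direction, consistent with the properties stated above though not necessarily the paper's exact scaling, is to rescale the gradient term so that g_k^{\top} d_k = -\|g_k\|^2 holds by construction:

    d_k = -\theta_k\, g_k + \beta_k^{FR}\, d_{k-1}, \qquad
    \theta_k = 1 + \beta_k^{FR}\, \frac{g_k^{\top} d_{k-1}}{\|g_k\|^2}
    \;\Longrightarrow\; g_k^{\top} d_k = -\|g_k\|^2.

Under an exact line search g_k^{\top} d_{k-1} = 0, so \theta_k = 1 and the direction reduces to the standard FR direction, matching the reduction property stated in the abstract.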

12.
In this paper, we propose a three-term conjugate gradient method via the symmetric rank-one (SR1) update. The basic idea is to exploit the good properties of the SR1 update in providing quality Hessian approximations in order to construct a conjugate gradient line search direction that requires no matrix storage and possesses the sufficient descent property. Numerical experiments on a set of standard unconstrained optimization problems show that the proposed method is superior to many well-known conjugate gradient methods in terms of efficiency and robustness.
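
For reference, the symmetric rank-one update that motivates the construction is, with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k:

    B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{(y_k - B_k s_k)^{\top} s_k}.

A matrix-free method of this kind works only with the vectors s_k and y_k, never forming B_k explicitly.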

13.
In this article, by slightly modifying the search direction of the nonmonotone Hestenes–Stiefel method, a variant Hestenes–Stiefel conjugate gradient method is proposed that satisfies the sufficient descent condition independently of any line search. The algorithm also exploits both gradient and function value information. We establish the global convergence of our methods without assuming that the steplength is bounded away from zero. Numerical results illustrate that our method can efficiently solve the test problems and is therefore promising.

14.
A new class of nonmonotone spectral conjugate gradient methods is proposed. By introducing a hybridization factor, a new choice of the conjugate parameter is obtained by combining the HS and PRP methods. On this basis, the spectral parameter is chosen appropriately so that every search direction is a sufficient descent direction, independently of the line search condition. In addition, the method modifies the nonmonotone line search rule proposed by Zhang and Hager, and global convergence is proved under weaker assumptions. Numerical experiments demonstrate the good computational performance of the method.
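
For context, here is a Python sketch of the Zhang-Hager nonmonotone Armijo rule that the paper modifies: the usual reference value f(x_k) is replaced by a weighted average C_k of past function values (the constants eta and delta and the backtracking factor are illustrative choices; the paper's modified rule differs):

    # Sketch of the Zhang-Hager nonmonotone Armijo test.
    def zhang_hager_step(f, x, d, g_dot_d, C, Q, eta=0.85, delta=1e-4):
        # Assumes d is a descent direction at x, i.e. g_dot_d = grad(x) @ d < 0.
        alpha = 1.0
        while f(x + alpha * d) > C + delta * alpha * g_dot_d:
            alpha *= 0.5                              # backtrack
        x_new = x + alpha * d
        Q_new = eta * Q + 1.0                         # running weight
        C_new = (eta * Q * C + f(x_new)) / Q_new      # weighted average of past f-values
        return x_new, C_new, Q_new

    # Initialize with C = f(x0), Q = 1.0; eta = 0 recovers the monotone Armijo rule.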

15.
Conjugate gradient methods are important iterative methods for solving large-scale unconstrained optimization problems, and much recent research has focused on developing more effective variants. In this paper, we propose another hybrid conjugate gradient method, formed as a linear combination of the Dai-Yuan (DY) and Hestenes-Stiefel (HS) methods. The sufficient descent condition and the global convergence of the method are established under the generalized Wolfe line search conditions. Compared with other conjugate gradient methods, the proposed method gives good numerical results and is effective.

16.
This paper proposes a class of three-term hybrid conjugate gradient algorithms for solving unconstrained optimization problems. The new algorithms combine the Hestenes-Stiefel algorithm with the Dai-Yuan method and, without requiring a descent condition, their convergence under the Wolfe line search is proved. Numerical experiments also show the advantage of this hybrid conjugate gradient algorithm over the HS and PRP methods.

17.
In this paper, we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is selected as a linear combination of the gradient and the previous direction, with coefficients chosen so that both the descent and the conjugacy conditions are satisfied at every iteration. The algorithm introduces a modified Wolfe line search in which the parameter in the second Wolfe condition is adjusted at every iteration. It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with the strong Wolfe line search generates directions bounded away from infinity. The algorithm uses an acceleration scheme that modifies the steplength so as to improve the reduction of the function values along the iterations. Numerical comparisons with some conjugate gradient algorithms, using a set of 75 unconstrained optimization problems with different dimensions, show that the computational scheme outperforms known conjugate gradient algorithms such as Hestenes-Stiefel; Polak, Ribière and Polyak; Dai-Yuan and the hybrid Dai-Yuan; and CG_DESCENT with Wolfe line search, as well as the quasi-Newton L-BFGS method.
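
A sketch of one plausible realization of such an acceleration, under the assumption that it acts like a one-dimensional Newton correction of the step along d_k estimated from two gradient evaluations (the paper's exact formula may differ; grad is assumed to return NumPy arrays):

    def accelerate_step(grad, x, d, alpha):
        # Refine an already-computed step length alpha along d by a 1-D Newton
        # correction of phi(a) = f(x + a * d), using two gradient evaluations.
        g0 = grad(x)
        gz = grad(x + alpha * d)
        phi_prime0 = g0 @ d                      # phi'(0)
        curvature = (gz - g0) @ d / alpha        # finite-difference estimate of phi''
        if curvature > 0:                        # accept correction only if locally convex
            alpha = -phi_prime0 / curvature      # Newton step for phi from a = 0
        return x + alpha * d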

18.
Based on a singular value analysis of an extension of the Polak–Ribière–Polyak method, a nonlinear conjugate gradient method with the following two optimal features is proposed: the condition number of its search direction matrix is minimal, and the distance of its search direction from that of the descent nonlinear conjugate gradient method proposed by Zhang et al. is minimal. Under proper conditions, global convergence of the method can be achieved. To enhance the efficiency of the proposed method, Powell's truncation of the conjugate gradient parameters is used. The method is computationally compared with the nonlinear conjugate gradient method proposed by Zhang et al. and a modified Polak–Ribière–Polyak method proposed by Yuan. The numerical comparisons show the efficiency of the proposed method in the sense of the Dolan–Moré performance profile.

19.
王开荣, 刘奔. 《计算数学》, 2012, 34(1): 81-92
Conjugate gradient methods are a very important class of methods for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient method based on a modified BFGS formula. The method possesses the sufficient descent property independently of the line search. For general nonlinear functions, the global convergence of the method is proved. Numerical results show that the method is effective.
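
For context, the classical memoryless BFGS search direction (Shanno), shown below, is the usual bridge between BFGS-type updates and conjugate gradient directions; a method built on a modified BFGS formula typically changes the vector y_k while keeping this structure (the paper's precise modification is not reproduced here). With s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k:

    d_{k+1} = -g_{k+1}
    + \frac{(g_{k+1}^{\top} s_k)\, y_k + (g_{k+1}^{\top} y_k)\, s_k}{s_k^{\top} y_k}
    - \left(1 + \frac{y_k^{\top} y_k}{s_k^{\top} y_k}\right) \frac{g_{k+1}^{\top} s_k}{s_k^{\top} y_k}\, s_k.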
