Similar Documents (20 results)
1.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity of small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.
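For orientation, the generic nonlinear conjugate gradient iteration and the (weak) Wolfe conditions referred to in this and most of the abstracts below can be written, in standard notation with g_k = \nabla f(x_k), as

    x_{k+1} = x_k + \alpha_k d_k,\qquad d_0 = -g_0,\qquad d_k = -g_k + \beta_k d_{k-1}\ (k \ge 1),
    f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k g_k^T d_k,\qquad
    \nabla f(x_k + \alpha_k d_k)^T d_k \ge \sigma\, g_k^T d_k,\qquad 0 < \delta < \sigma < 1.

The specific three-parameter formula for \beta_k used by this family is not reproduced in the abstract.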



2.
《Optimization》2012,61(4):993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. With the Wolfe line search, the two-parameter family is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. The numerical results show that the method is efficient for the given test problems. In addition, methods related to this family are discussed in a unified way.
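The two-parameter formula itself is not given in the abstract; for context, the classical conjugate parameters that such families typically contain as special cases are (with y_{k-1} = g_k - g_{k-1})

    \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2},\qquad
    \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2},\qquad
    \beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}},\qquad
    \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}.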

3.
Jiang  Xianzhen  Liao  Wei  Yin  Jianghua  Jian  Jinbao 《Numerical Algorithms》2022,91(1):161-191

In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods is proposed for solving unconstrained optimization problems. The conjugate parameter in the search direction is a hybrid of the Dai-Yuan conjugate parameter and an arbitrary one. The search direction is then the sum of the negative gradient direction and a convex combination of the last search direction and the gradient at the previous iteration. Without choosing any specific conjugate parameters, we show that the search direction generated by the family always possesses the descent property independently of the line search technique, and that it is globally convergent under the usual assumptions and the weak Wolfe line search. To verify the effectiveness of the presented family, we further design a specific conjugate parameter and perform medium- to large-scale numerical experiments on smooth unconstrained optimization and image restoration problems. The numerical results show the encouraging efficiency and applicability of the proposed methods, even compared with state-of-the-art methods.
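The Dai-Yuan parameter entering the hybrid is the standard \beta_k^{DY} = \|g_k\|^2 / (d_{k-1}^T y_{k-1}). Since the abstract does not give the exact update, the direction described above can only be sketched; one reading consistent with it is a three-term form such as

    d_k = -g_k + \beta_k \bigl( \lambda_k d_{k-1} + (1 - \lambda_k)\, g_{k-1} \bigr),\qquad \lambda_k \in [0, 1],

where \lambda_k is the convex-combination weight and \beta_k the hybrid conjugate parameter; the coefficients actually used in the paper may differ.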


4.
Min Li 《Optimization Letters》2018,12(8):1911-1927
Based on the memoryless BFGS quasi-Newton method, a family of three-term nonlinear conjugate gradient methods is proposed. For any line search, the directions generated by the new methods are sufficient descent directions. Using some efficient techniques, global convergence results are established when the line search fulfills the Wolfe or the Armijo conditions. Moreover, the r-linear convergence rate of the methods is analyzed as well. Numerical comparisons show that the proposed methods are efficient for the unconstrained optimization problems in the CUTEr library.
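For context, the memoryless BFGS direction from which such three-term methods are typically derived (taking the identity as the initial matrix, with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}) is commonly written as

    d_k = -g_k + \frac{g_k^T s_{k-1}}{s_{k-1}^T y_{k-1}}\, y_{k-1}
          + \Bigl[ \frac{g_k^T y_{k-1}}{s_{k-1}^T y_{k-1}}
          - \Bigl( 1 + \frac{\|y_{k-1}\|^2}{s_{k-1}^T y_{k-1}} \Bigr)
            \frac{g_k^T s_{k-1}}{s_{k-1}^T y_{k-1}} \Bigr] s_{k-1};

the precise scaling used in the paper may differ.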

5.
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the size of the scalar β_k relative to the one in the new method belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we are able to construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.

6.
This paper explores the convergence of nonlinear conjugate gradient methods with the Goldstein line search and without regular restarts. Under this line search, global convergence of a subsequence is established for the famous Fletcher-Reeves conjugate gradient method. The same result can be obtained for the Polak-Ribière-Polyak method and others. *This work was partially supported by the National Hi-tech Program (863, 2002AA104540) and the National Natural Science Foundation of China (No. 60373060).
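For reference, the Goldstein line search referred to here accepts a stepsize \alpha_k satisfying the two-sided condition

    f(x_k) + (1 - c)\,\alpha_k g_k^T d_k \le f(x_k + \alpha_k d_k) \le f(x_k) + c\,\alpha_k g_k^T d_k,\qquad 0 < c < 1/2.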

7.
The conjugate gradient method is one of the most commonly used methods in optimization and is widely applied to solving large-scale optimization problems; different choices of the parameter β_k give rise to different conjugate gradient methods. This paper presents a class of conjugate gradient algorithms with three parameters. Under the given conditions, the chosen β_k is shown to produce a descent direction at every iteration, and the algorithm is globally convergent under the strong Wolfe line search.
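The strong Wolfe line search mentioned above consists of the sufficient decrease condition together with a two-sided curvature condition,

    f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k g_k^T d_k,\qquad
    |\nabla f(x_k + \alpha_k d_k)^T d_k| \le \sigma\,|g_k^T d_k|,\qquad 0 < \delta < \sigma < 1.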

8.
张忠元  张立卫 《经济数学》2007,24(3):307-314
This paper establishes a criterion for the global convergence of conjugate gradient methods. Based on this criterion, the global convergence of a class of three-parameter conjugate gradient methods and of a variant of the DY method is proved.

9.
The linear conjugate gradient method is an optimal method for convex quadratic minimization due to the Krylov subspace minimization property. The introduction of the limited-memory BFGS method and the Barzilai-Borwein gradient method, however, heavily restricted the use of the conjugate gradient method for large-scale nonlinear optimization. This is, to a great extent, due to the requirement of a relatively exact line search at each iteration and the loss of the conjugacy property of the search directions on various occasions. On the contrary, the limited-memory BFGS method and the Barzilai-Borwein gradient method share the so-called asymptotical one-stepsize-per-line-search property, namely, the trial stepsize in the method will asymptotically be accepted by the line search when the iterate is close to the solution. This paper focuses on the analysis of the subspace minimization conjugate gradient method by Yuan and Stoer (1995). Specifically, by choosing the parameter in the method using the Barzilai-Borwein idea, we are able to provide some efficient Barzilai-Borwein conjugate gradient (BBCG) methods. Initial numerical experiments show that one of the variants, BBCG3, is especially efficient among many others without line searches. This variant of the BBCG method might enjoy the asymptotical one-stepsize-per-line-search property and become a strong candidate for large-scale nonlinear optimization.
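For reference, the two standard Barzilai-Borwein stepsizes behind the BBCG parameter choice are

    \alpha_k^{BB1} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}},\qquad
    \alpha_k^{BB2} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}},\qquad
    s_{k-1} = x_k - x_{k-1},\ \ y_{k-1} = g_k - g_{k-1};

how exactly they enter the subspace minimization parameter of BBCG3 is specified in the paper itself, not in the abstract.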

10.
Building on the new class of conjugate gradient methods proposed in [1], this paper presents two new classes of nonlinear descent conjugate gradient methods for unconstrained optimization. Both classes are guaranteed to generate a descent direction at every iteration without any line search. For general nonconvex functions, the global convergence of the two new classes is proved under the Wolfe line search conditions.

11.
It is well known that the HS method and the PRP method may not converge for nonconvex optimization, even with an exact line search. Some globalization techniques have been proposed, for instance, the PRP+ globalization technique and the Grippo-Lucidi globalization technique for the PRP method. In this paper, we propose a new efficient globalization technique for general nonlinear conjugate gradient methods for nonconvex minimization. This new technique makes full use of the information in the previous search direction. Under suitable conditions, we prove that nonlinear conjugate gradient methods with this new technique are globally convergent for nonconvex minimization if the line search satisfies the Wolfe conditions or the Armijo condition. Extensive numerical experiments are reported to show the efficiency of the proposed technique.
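The PRP+ globalization mentioned above is the standard nonnegative truncation of the Polak-Ribière-Polyak parameter,

    \beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2},\qquad
    \beta_k^{PRP+} = \max\{\beta_k^{PRP},\, 0\};

the new technique proposed in the paper instead exploits the previous search direction, and its exact form is not given in the abstract.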

12.
Conjugate gradient methods have been widely used as schemes to solve large-scale unconstrained optimization problems. The search directions of the conventional methods are defined using the gradient of the objective function. This paper proposes two nonlinear conjugate gradient methods which take into account more information about the objective function. We prove that they converge globally and compare them numerically with conventional methods. The results show that, with a slight modification to the direction, one of our methods performs as well as the best conventional method employing the Hestenes–Stiefel formula.

13.
It is well known that the sufficient descent condition is very important to the global convergence of the nonlinear conjugate gradient method. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of the proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported. This work is supported by Guangxi University SF grants X061041 and China NSF grant 10761001.
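The sufficient descent condition referred to here is, in its usual form,

    g_k^T d_k \le -c\,\|g_k\|^2 \quad \text{for all } k, \text{ with some constant } c > 0.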

14.

Iterative substructuring methods, also known as Schur complement methods, form an important family of domain decomposition algorithms. They are preconditioned conjugate gradient methods where solvers on local subregions and a solver on a coarse mesh are used to construct the preconditioner. For conforming finite element approximations of H^1, it is known that the number of conjugate gradient steps required to reduce the residual norm by a fixed factor is independent of the number of substructures and grows only as the logarithm of the dimension of the local problem associated with an individual substructure. In this paper, the same result is established for similar iterative methods for low-order Nédélec finite elements, which approximate H(curl) in two dimensions. Results of numerical experiments are also provided.
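The logarithmic growth stated above is usually expressed as a condition number bound of the generic form

    \kappa \le C\,\bigl(1 + \log(H/h)\bigr)^2,

where H is the subdomain diameter, h the mesh size, and C is independent of both; the precise statement for the Nédélec case is given in the paper itself.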



15.
Numerical Algorithms - In this paper, we present a family of Perry conjugate gradient methods for solving large-scale systems of monotone nonlinear equations. The methods are developed by combining...

16.

This paper considers sufficient descent Riemannian conjugate gradient methods with line search algorithms. We propose two kinds of sufficient descent nonlinear conjugate gradient methods and prove that these methods satisfy the sufficient descent condition on Riemannian manifolds. One is a hybrid method combining a Fletcher–Reeves-type method with a Polak–Ribière–Polyak-type method, and the other is a Hager–Zhang-type method, both of which are generalizations of those used in Euclidean space. Moreover, we prove that the hybrid method has a global convergence property under the strong Wolfe conditions and that the Hager–Zhang-type method has the sufficient descent property regardless of whether a line search is used or not. Further, we review two kinds of line search algorithms on Riemannian manifolds and numerically compare our generalized methods by solving several Riemannian optimization problems. The results show that the performance of the proposed hybrid methods greatly depends on the type of line search used. Meanwhile, the Hager–Zhang-type method has a fast convergence property regardless of the type of line search used.


17.
By minimizing two different upper bounds for the matrix which generates the search directions of the nonlinear conjugate gradient method proposed by Dai and Liao, two modified conjugate gradient methods are proposed. Under proper conditions, it is briefly shown that the methods are globally convergent when the line search fulfills the strong Wolfe conditions. Numerical comparisons between implementations of the proposed methods and the conjugate gradient methods proposed by Hager and Zhang, and by Dai and Kou, are made on a set of unconstrained optimization test problems from the CUTEr collection. The results show the efficiency of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.
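The Dai-Liao method underlying both modifications uses the conjugate parameter

    \beta_k^{DL} = \frac{g_k^T y_{k-1} - t\, g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}},\qquad t \ge 0,

so that d_k = -g_k + \beta_k^{DL} d_{k-1} can be viewed as d_k = -A_k g_k for a suitable matrix A_k; the two upper bounds minimized in the paper concern that matrix and are not reproduced in the abstract.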

18.
Although the study of the global convergence of the Polak–Ribière–Polyak (PRP), Hestenes–Stiefel (HS) and Liu–Storey (LS) conjugate gradient methods has made great progress, the convergence of these algorithms for general nonlinear functions remains unsettled, not to mention under weak conditions on the objective function and weak line search rules. Besides, it is also interesting to investigate whether there exists a general method that converges under the standard Armijo line search for general nonconvex functions, since very few relevant results have been obtained. So in this paper, we present a new general form of conjugate gradient methods whose theoretical significance is attractive. With any formula β_k ≥ 0 and under weak conditions, the proposed method satisfies the sufficient descent condition independently of the line search used and of the convexity of the function, and its global convergence can be achieved under the standard Wolfe line search or even under the standard Armijo line search. Based on this new method, convergence results for the PRP, HS, LS, Dai–Yuan-type (DY) and Conjugate-Descent-type (CD) methods are established. Preliminary numerical results show the efficiency of the proposed methods.
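As a generic illustration only (not the specific scheme of this paper), the following minimal Python sketch shows a nonlinear conjugate gradient loop with a nonnegative conjugate parameter (the PRP+ truncation is used here merely as a placeholder for a formula β_k ≥ 0) and a standard Armijo backtracking line search; all names, tolerances and the restart safeguard are our own choices.

    import numpy as np

    def cg_armijo(f, grad, x0, beta_rule=None, delta=1e-4, shrink=0.5,
                  alpha0=1.0, tol=1e-6, max_iter=5000):
        """Generic nonlinear CG with Armijo backtracking (illustrative sketch only)."""
        if beta_rule is None:
            # Placeholder conjugate parameter: PRP+ truncation, always >= 0.
            beta_rule = lambda g_new, g_old, d_old: max(
                g_new @ (g_new - g_old) / (g_old @ g_old), 0.0)
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                        # first direction: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol:
                break
            # Armijo backtracking: shrink alpha until sufficient decrease holds.
            alpha, fx, slope = alpha0, f(x), g @ d
            for _ in range(60):
                if f(x + alpha * d) <= fx + delta * alpha * slope:
                    break
                alpha *= shrink
            x_new = x + alpha * d
            g_new = grad(x_new)
            d = -g_new + beta_rule(g_new, g, d) * d   # next CG direction
            if g_new @ d >= 0:                        # safeguard: restart on non-descent
                d = -g_new
            x, g = x_new, g_new
        return x

    # Example: minimize the Rosenbrock function from a standard starting point.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(cg_armijo(f, grad, np.array([-1.2, 1.0])))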

19.
Based on a singular value analysis of an extension of the Polak–Ribière–Polyak method, a nonlinear conjugate gradient method with the following two optimal features is proposed: the condition number of its search direction matrix is minimum, and the distance of its search direction from the search direction of a descent nonlinear conjugate gradient method proposed by Zhang et al. is also minimum. Under proper conditions, global convergence of the method can be achieved. To enhance efficiency of the proposed method, Powell's truncation of the conjugate gradient parameters is used. The method is computationally compared with the nonlinear conjugate gradient method proposed by Zhang et al. and a modified Polak–Ribière–Polyak method proposed by Yuan. Results of numerical comparisons show the efficiency of the proposed method in the sense of the Dolan–Moré performance profile.

20.
In this paper, based on a new class of conjugate gradient methods proposed by Rivaie, Dai, Omer et al., we propose a class of improved conjugate gradient methods for nonconvex unconstrained optimization. Different from the above methods, our methods possess the following properties: (i) the search direction always satisfies the sufficient descent condition independently of any line search; (ii) these approaches are globally convergent with the standard Wolfe line search or the standard Armijo line search without any convexity assumption. Moreover, our numerical results also demonstrate the efficiency of the proposed methods.
