Similar documents (10 results)
1.
In this paper, we make a small modification to the Wei–Yao–Liu nonlinear conjugate gradient method proposed by Wei et al. [Z. Wei, S. Yao, L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput. 183 (2006) 1341–1350] so that the modified method possesses better convergence properties. In fact, we prove that the modified method satisfies the sufficient descent condition with a larger parameter under the strong Wolfe line search and converges globally for nonconvex minimization. We also extend these results to the Hestenes–Stiefel method and prove that the modified HS method is globally convergent for nonconvex functions under the standard Wolfe conditions. Numerical results are reported using test problems from the CUTE library.
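For orientation, here is a minimal sketch (assuming a NumPy setting with gradients as 1-D arrays) of the original Wei–Yao–Liu conjugate parameter that the paper modifies; the modified parameter itself is not reproduced here.

```python
import numpy as np

def beta_wyl(g_new, g_old):
    """Original Wei-Yao-Liu conjugate parameter (Wei et al., 2006):
    beta = g_k^T (g_k - (||g_k|| / ||g_{k-1}||) g_{k-1}) / ||g_{k-1}||^2.
    The paper studies a modification of this formula; see the paper for details."""
    ratio = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    return g_new @ (g_new - ratio * g_old) / np.linalg.norm(g_old) ** 2
```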

2.
Jiang Xianzhen, Liao Wei, Yin Jianghua, Jian Jinbao. Numerical Algorithms 91(1): 161–191 (2022)

In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods is proposed for solving unconstrained optimization problems. The conjugate parameter in the search direction is a hybrid of the Dai–Yuan conjugate parameter and an arbitrary one. The search direction is then the sum of the negative gradient direction and a convex combination of the previous search direction and the gradient at the previous iterate. Without choosing any specific conjugate parameter, we show that the search direction generated by the family always possesses the descent property independently of the line search technique, and that it is globally convergent under the usual assumptions and the weak Wolfe line search. To verify the effectiveness of the presented family, we further design a specific conjugate parameter and perform medium- and large-scale numerical experiments on smooth unconstrained optimization and image restoration problems. The numerical results show the encouraging efficiency and applicability of the proposed methods, even compared with state-of-the-art methods.
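The exact formulas of the family are not given in the abstract; the following schematic, with placeholder hybrid parameter beta_k and convex weight lam, only illustrates the general shape of a three-term direction built from the negative gradient plus a convex combination of the previous direction and the previous gradient.

```python
import numpy as np

def three_term_direction(g_k, g_prev, d_prev, beta_k, lam):
    """Schematic three-term direction:
    d_k = -g_k + beta_k * (lam * d_prev + (1 - lam) * g_prev),  0 <= lam <= 1.
    beta_k would be a hybrid of the Dai-Yuan parameter and another choice;
    the paper's actual combination is more specific than this sketch."""
    return -g_k + beta_k * (lam * d_prev + (1.0 - lam) * g_prev)
```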


3.
It is well known that the HS method and the PRP method may not converge for nonconvex optimization even with exact line search. Some globalization techniques have been proposed, for instance the PRP+ globalization technique and the Grippo–Lucidi globalization technique for the PRP method. In this paper, we propose a new, efficient globalization technique for general nonlinear conjugate gradient methods for nonconvex minimization. This new technique makes full use of the information in the previous search direction. Under suitable conditions, we prove that nonlinear conjugate gradient methods equipped with this technique are globally convergent for nonconvex minimization if the line search satisfies the Wolfe conditions or the Armijo condition. Extensive numerical experiments are reported to show the efficiency of the proposed technique.
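As background for the PRP+ technique mentioned above (the paper's new technique is different and is not reproduced here), a minimal sketch of the classical PRP+ truncation:

```python
import numpy as np

def beta_prp_plus(g_new, g_old):
    """PRP+ globalization: beta = max(beta_PRP, 0), where
    beta_PRP = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2."""
    beta_prp = g_new @ (g_new - g_old) / (g_old @ g_old)
    return max(beta_prp, 0.0)
```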

4.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of the Hestenes–Stiefel and Dai–Yuan parameters, i.e. β_k = (1 − θ_k) β_k^{HS} + θ_k β_k^{DY}. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation ∇²f(x_{k+1}) s_k = y_k, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes–Stiefel and Dai–Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
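A compact sketch of the convex-combination parameter described above; here θ_k (theta) is taken as an input, whereas in the algorithm it is derived from the Newton direction and the quasi-Newton equation.

```python
import numpy as np

def beta_hybrid_hsdy(g_new, g_old, d_prev, theta):
    """Hybrid parameter: beta = (1 - theta) * beta_HS + theta * beta_DY,
    with y = g_k - g_{k-1},
    beta_HS = g_k^T y / (d_{k-1}^T y),  beta_DY = ||g_k||^2 / (d_{k-1}^T y)."""
    y = g_new - g_old
    denom = d_prev @ y
    beta_hs = (g_new @ y) / denom
    beta_dy = (g_new @ g_new) / denom
    return (1.0 - theta) * beta_hs + theta * beta_dy
```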

5.
Although the study of the global convergence of the Polak–Ribière–Polyak (PRP), Hestenes–Stiefel (HS) and Liu–Storey (LS) conjugate gradient methods has made great progress, the convergence of these algorithms for general nonlinear functions remains erratic, not to mention under weak conditions on the objective function and weak line search rules. Besides, it is also interesting to investigate whether there exists a general method that converges under the standard Armijo line search for general nonconvex functions, since very few relevant results have been achieved. So in this paper, we present a new general form of conjugate gradient methods whose theoretical significance is attractive. With any formula β_k ≥ 0 and under weak conditions, the proposed method satisfies the sufficient descent condition independently of the line search used and the function convexity, and its global convergence can be achieved under the standard Wolfe line search or even under the standard Armijo line search. Based on this new method, convergence results for the PRP, HS, LS, Dai–Yuan-type (DY) and Conjugate-Descent-type (CD) methods are established. Preliminary numerical results show the efficiency of the proposed methods.
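For reference, the classical Dai–Yuan and Conjugate-Descent parameters mentioned above, written as small NumPy helpers; the paper's general form itself is not reproduced here.

```python
import numpy as np

def beta_dy(g_new, g_old, d_prev):
    """Dai-Yuan parameter: beta = ||g_k||^2 / (d_{k-1}^T (g_k - g_{k-1}))."""
    return (g_new @ g_new) / (d_prev @ (g_new - g_old))

def beta_cd(g_new, g_old, d_prev):
    """Conjugate-Descent (Fletcher) parameter: beta = -||g_k||^2 / (d_{k-1}^T g_{k-1})."""
    return -(g_new @ g_new) / (d_prev @ g_old)
```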

6.
Although the Liu–Storey (LS) nonlinear conjugate gradient method has a similar structure to the well-known Polak–Ribière–Polyak (PRP) and Hestenes–Stiefel (HS) methods, research on this method is comparatively rare. In this paper, based on the memoryless BFGS quasi-Newton method, we propose a new LS-type method which converges globally for general functions with the Grippo–Lucidi line search. Moreover, we modify this new LS method so that the modified scheme is globally convergent for nonconvex minimization when the strong Wolfe line search is used. Numerical results are also reported.
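For context, a minimal sketch of the classical Liu–Storey parameter referred to above; the memoryless-BFGS-based modification proposed in the paper is more involved and is not shown.

```python
import numpy as np

def beta_ls(g_new, g_old, d_prev):
    """Liu-Storey parameter: beta = -g_k^T (g_k - g_{k-1}) / (d_{k-1}^T g_{k-1})."""
    return -(g_new @ (g_new - g_old)) / (d_prev @ g_old)
```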

7.
In this paper, by the use of Gram–Schmidt orthogonalization, we propose a class of modified conjugate gradient methods. The methods are modifications of the well-known conjugate gradient methods, including the PRP, HS, FR and DY methods. A common property of the modified methods is that the direction generated by any member of the class satisfies g_k^T d_k = −||g_k||^2. Moreover, if the line search is exact, each modified method reduces to the corresponding standard conjugate gradient method. In particular, we study the modified YT and YT+ methods. Under suitable conditions, we prove the global convergence of these two methods. Extensive numerical experiments show that the proposed methods are efficient for the test problems from the CUTE library.
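A hedged sketch of a Gram–Schmidt-style modification consistent with the stated property g_k^T d_k = −||g_k||^2; the concrete YT and YT+ formulas are in the paper and may differ from this illustration.

```python
import numpy as np

def gs_modified_direction(g_k, d_prev, beta_k):
    """Project the previous direction onto the orthogonal complement of g_k
    (a Gram-Schmidt step), then form
    d_k = -g_k + beta_k * (d_prev - (g_k^T d_prev / ||g_k||^2) * g_k),
    which gives g_k^T d_k = -||g_k||^2 for any beta_k.  With exact line search,
    g_k^T d_prev = 0 and the direction reduces to the standard CG direction."""
    gk2 = g_k @ g_k
    return -g_k + beta_k * (d_prev - (g_k @ d_prev) / gk2 * g_k)
```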

8.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter β_k is computed by approximating the Hessian/vector product through finite differences. For the search direction computation, the method uses a forward difference approximation to the Hessian/vector product together with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms, including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the new conjugacy condition and related conjugate gradient method of Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton method TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Stat. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
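A minimal sketch, under simplified assumptions, of the forward-difference Hessian/vector approximation and a Daniel-type β_k built from it; the paper's interval choice and acceleration scheme are not reproduced, and the step heuristic below is illustrative only.

```python
import numpy as np

def hessian_vector_fd(grad, x, v, eps=1e-6):
    """Forward-difference approximation of the Hessian/vector product:
    H(x) v ~ (grad(x + h v) - grad(x)) / h, with a simple step-size heuristic."""
    h = eps * (1.0 + np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-16)
    return (grad(x + h * v) - grad(x)) / h

def beta_daniel_type(grad, x_new, g_new, d_prev):
    """Daniel-type conjugate parameter using the approximated product:
    beta = g_{k+1}^T (H d_k) / (d_k^T (H d_k)), evaluated at the current iterate.
    Illustrative only; the paper's formula and interval choice differ in detail."""
    hd = hessian_vector_fd(grad, x_new, d_prev)
    return (g_new @ hd) / (d_prev @ hd)
```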

9.
New properties of a nonlinear conjugate gradient method
Summary. This paper provides several new properties of the nonlinear conjugate gradient method in [5]. Firstly, the method is proved to have a certain self-adjusting property that is independent of the line search and the function convexity. Secondly, under mild assumptions on the objective function, the method is shown to be globally convergent with a variety of line searches. Thirdly, we find that, instead of the negative gradient direction, the search direction defined by the nonlinear conjugate gradient method in [5] can be used to restart any optimization method while guaranteeing the global convergence of the method. Some numerical results are also presented. (Received March 12, 1999 / Revised version received April 25, 2000 / Published online February 5, 2001)

10.
In this paper, we propose a modified Polak–Ribière–Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergent. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
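A sketch of a descent-type modified PRP direction consistent with the stated property that g_k^T d_k = −||g_k||^2 independently of the line search; the coefficients below follow a standard three-term construction and should be treated as an assumption rather than the paper's definitive formulas.

```python
import numpy as np

def modified_prp_direction(g_k, g_prev, d_prev):
    """Three-term PRP-type direction:
    d_k = -g_k + beta_PRP * d_prev - theta * y_prev, with
    y_prev = g_k - g_{k-1}, beta_PRP = g_k^T y_prev / ||g_{k-1}||^2,
    theta = g_k^T d_prev / ||g_{k-1}||^2.
    By construction g_k^T d_k = -||g_k||^2 for any line search; with exact
    line search (g_k^T d_prev = 0) it reduces to the ordinary PRP direction."""
    y = g_k - g_prev
    denom = g_prev @ g_prev
    beta = (g_k @ y) / denom
    theta = (g_k @ d_prev) / denom
    return -g_k + beta * d_prev - theta * y
```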
