Similar Documents
1.
In order to propose a scaled conjugate gradient method, the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez are hybridized following Andrei's approach. Since the proposed method is based on a revised form of a modified secant equation suggested by Zhang et al., one of its interesting features is that it uses the available function values in addition to the gradient values. It is shown that, for uniformly convex objective functions, the search directions of the method fulfill the sufficient descent condition, which leads to global convergence. Numerical comparisons of the method with an efficient scaled conjugate gradient method proposed by Andrei, carried out on a set of unconstrained optimization test problems from the CUTEr collection, show the efficiency of the proposed modified scaled conjugate gradient method in the sense of the performance profile introduced by Dolan and Moré.
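The Dolan–Moré performance profile mentioned in this abstract can be computed directly from a table of per-problem solver costs. The following is a minimal sketch of that metric only; the solver timings shown are hypothetical, not data from the paper:

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profile.

    costs: (n_problems, n_solvers) array of e.g. CPU times or iteration
           counts; use np.inf where a solver failed on a problem.
    Returns rho with shape (len(taus), n_solvers), where rho[i, s] is the
    fraction of problems solver s solves within a factor taus[i] of the
    best solver on each problem.
    """
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)           # best cost per problem
    ratios = costs / best                             # performance ratios r_{p,s}
    n_problems = costs.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_problems for tau in taus])

# Hypothetical timings (seconds) for two solvers on four problems.
times = np.array([[1.0, 1.2],
                  [2.0, 1.5],
                  [0.5, np.inf],   # the second solver failed on this problem
                  [3.0, 3.0]])
print(performance_profile(times, taus=[1.0, 2.0, 4.0]))
```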

2.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient method SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN of Shanno and Phua (1976, Algorithm 500, Minimization of unconstrained multivariate functions. ACM Transactions on Mathematical Software, 2, 87–94).
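The "spectral gradient" choice for the scaling parameter referred to above is, in its standard form, the Barzilai–Borwein quotient built from the latest step and gradient change. The sketch below shows only this standard choice with a simple safeguard (the anticipative variant based on function values is not reproduced); variable names are illustrative:

```python
import numpy as np

def spectral_scaling(s, y, theta_min=1e-10, theta_max=1e10):
    """Spectral (Barzilai-Borwein) scaling of the gradient.

    s = x_{k+1} - x_k, y = g_{k+1} - g_k.  Returns theta such that
    theta * I approximates the inverse Hessian along the last step.
    The safeguard keeps theta positive and bounded when s^T y <= 0.
    """
    sty = float(np.dot(s, y))
    if sty <= 0.0:
        return 1.0                      # fall back to the unscaled gradient
    theta = float(np.dot(s, s)) / sty   # theta_k = s^T s / s^T y
    return min(max(theta, theta_min), theta_max)

# Scaled steepest-descent / restart direction: d = -theta * g_{k+1}.
```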

3.
In this paper, a subspace three-term conjugate gradient method is proposed. The search directions of the method are generated by minimizing a quadratic approximation of the objective function on a subspace, and they satisfy the descent condition and the Dai–Liao conjugacy condition. At each iteration, the subspace is spanned by the current negative gradient and the latest two search directions, so its dimension is 2 or 3. Under some appropriate assumptions, the global convergence of the proposed method is established. Numerical experiments show that the proposed method is competitive on a set of 80 unconstrained optimization test problems.

4.
A new adaptive scaled Broyden–Fletcher–Goldfarb–Shanno (BFGS) method for unconstrained optimization is presented. The third term in the standard BFGS update formula is scaled in order to reduce the large eigenvalues of the approximation to the Hessian of the minimizing function. Under the inexact Wolfe line search conditions, the global convergence of the adaptive scaled BFGS method is proved under very general conditions, without assuming convexity of the minimizing function. Using 80 unconstrained optimization test functions with a medium number of variables, preliminary numerical experiments show that this variant of the scaled BFGS method is more efficient than the standard BFGS update and than some other scaled BFGS methods.
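To make the construction concrete, the sketch below applies a scaling factor to the third (rank-one) term of the standard BFGS update; the factor gamma is treated as a given input here, since the paper's adaptive formula for it is not reproduced:

```python
import numpy as np

def scaled_bfgs_update(B, s, y, gamma):
    """BFGS update in which the third (y y^T) term is scaled by a positive
    parameter gamma.  B is the current Hessian approximation, s the step
    and y the gradient change; gamma = 1 recovers the standard update.
    """
    Bs = B @ s
    sBs = float(s @ Bs)
    sty = float(s @ y)
    if sty <= 0.0 or sBs <= 0.0:
        return B                        # skip the update to preserve positive definiteness
    return B - np.outer(Bs, Bs) / sBs + gamma * np.outer(y, y) / sty
```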

5.
The aim of this paper is to incorporate the preconditioned gradient path into a nonmonotone stabilization algorithm for unconstrained optimization. Global convergence and local superlinear convergence are established for this class of algorithms. Finally, we report in detail the numerical results, which show the effectiveness of the proposed algorithm.

6.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of β_k^{HS} (Hestenes–Stiefel) and β_k^{DY} (Dai–Yuan), i.e. β_k^C = (1 − θ_k)β_k^{HS} + θ_k β_k^{DY}. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation B_{k+1}s_k = y_k, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes–Stiefel and the Dai–Yuan conjugate gradient algorithms, as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
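A minimal sketch of the convex-combination idea described above, using the classical Hestenes–Stiefel and Dai–Yuan parameters; θ_k is treated here as a given number in [0, 1], whereas the paper derives it from the Newton-direction condition:

```python
import numpy as np

def hybrid_beta(g_new, g_old, d_old, theta):
    """Convex combination beta^C = (1 - theta)*beta^HS + theta*beta^DY.

    g_new, g_old: gradients at x_{k+1} and x_k; d_old: previous search
    direction.  Assumes d_old^T y > 0, which the Wolfe conditions
    guarantee for a descent direction d_old.
    """
    y = g_new - g_old
    dty = float(np.dot(d_old, y))
    beta_hs = float(np.dot(g_new, y)) / dty          # Hestenes-Stiefel
    beta_dy = float(np.dot(g_new, g_new)) / dty      # Dai-Yuan
    theta = min(max(theta, 0.0), 1.0)                # keep a genuine convex combination
    return (1.0 - theta) * beta_hs + theta * beta_dy

# New direction of the hybrid scheme: d_new = -g_new + hybrid_beta(...) * d_old
```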

7.
In this paper, a three-term conjugate gradient algorithm is developed for solving large-scale unconstrained optimization problems. The search direction at each iteration is determined by rectifying the steepest descent direction with the difference between successive iterates and the difference between the corresponding gradients. It is proved that such a direction satisfies the approximate secant condition as well as the conjugacy condition. Strategies of acceleration and restart are incorporated into the design of the algorithm to improve its numerical performance. Global convergence of the proposed algorithm is established under two mild assumptions. Implementing the algorithm on 75 benchmark test problems available in the literature, the obtained results indicate that it outperforms similar state-of-the-art algorithms.

8.
《Optimization》2012,61(12):2679-2691
In this article, we present an improved three-term conjugate gradient algorithm for large-scale unconstrained optimization. The search directions in the developed algorithm are proved to satisfy an approximate secant equation as well as the Dai–Liao conjugacy condition. With the standard Wolfe line search and a restart strategy, global convergence of the algorithm is established under mild conditions. Implementing the algorithm on 75 benchmark test problems with dimensions from 1000 to 10,000, the obtained numerical results indicate that it outperforms the state-of-the-art algorithms available in the literature, requiring less CPU time and fewer iterations when solving large-scale unconstrained optimization problems.

9.
A three-term conjugate gradient algorithm for large-scale unconstrained optimization using a subspace minimization technique is presented. In this algorithm the search directions are computed by minimizing the quadratic approximation of the objective function on a subspace spanned by the vectors −g_{k+1}, s_k and y_k. The search direction is taken as d_{k+1} = −g_{k+1} + a_k s_k + b_k y_k, where the scalars a_k and b_k are determined by minimizing the quadratic approximation of the objective function. The step-lengths are determined by the Wolfe line search conditions. We prove that the search directions are descent directions and satisfy the Dai–Liao conjugacy condition. The suggested algorithm is of three-term conjugate gradient type, for which both the descent and the conjugacy conditions are guaranteed. It is shown that, for uniformly convex functions, the directions generated by the algorithm are bounded above, i.e. the algorithm is convergent. Numerical experiments on a set of 750 unconstrained optimization test problems show that this new algorithm substantially outperforms the known Hestenes–Stiefel, Dai–Liao, Dai–Yuan and Polak–Ribière–Polyak conjugate gradient algorithms, as well as the limited-memory quasi-Newton method L-BFGS and the discrete truncated-Newton method TN.
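One way to make the subspace minimization concrete is to replace the Hessian by a memoryless-BFGS surrogate built from (s_k, y_k), so that the secant equation holds exactly, and then solve the resulting 2×2 reduced system for a_k and b_k. This surrogate is an illustrative assumption, not the coefficient formulas derived in the paper:

```python
import numpy as np

def subspace_three_term_direction(g, s, y):
    """Direction d = -g + a*s + b*y minimizing a quadratic model on the
    subspace spanned by {-g, s, y}, with the Hessian replaced by a
    memoryless-BFGS surrogate B built from (s, y) so that B s = y.
    """
    sty = float(s @ y)
    if sty <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g                                   # fall back to steepest descent
    sigma = float(y @ y) / sty                      # spectral scaling of the seed matrix

    def Bv(v):                                      # surrogate Hessian-vector product
        return sigma * v - sigma * s * (s @ v) / (s @ s) + y * (y @ v) / sty

    U = np.column_stack([s, y])                     # basis of the (a, b) part of the subspace
    BU = np.column_stack([Bv(s), Bv(y)])
    A = U.T @ BU                                    # 2x2 reduced Hessian
    rhs = -(U.T @ g + U.T @ Bv(-g))                 # reduced gradient at the point d0 = -g
    try:
        a, b = np.linalg.solve(A, rhs)
    except np.linalg.LinAlgError:
        return -g
    return -g + a * s + b * y
```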

10.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG by Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified in such a manner as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient algorithm SCG. The author was awarded the Romanian Academy Grant 168/2003.

11.
An accelerated hybrid conjugate gradient algorithm represents the subject of this paper. The parameter β_k is computed as a convex combination of β_k^{HS} (Hestenes and Stiefel, J Res Nat Bur Stand 49:409–436, 1952) and β_k^{DY} (Dai and Yuan, SIAM J Optim 10:177–182, 1999), i.e. β_k^C = (1 − θ_k)β_k^{HS} + θ_k β_k^{DY}. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the best direction we know, i.e. the Newton direction, while the pair (s_k, y_k) satisfies the modified secant condition given by Li et al. (J Comput Appl Math 202:523–539, 2007): B_{k+1}s_k = z_k, where z_k = y_k + (η_k / ‖s_k‖²)s_k, η_k = 2(f_k − f_{k+1}) + (g_k + g_{k+1})^T s_k, s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. It is shown that both for uniformly convex functions and for general nonlinear functions the algorithm with the strong Wolfe line search is globally convergent. The algorithm uses an acceleration scheme that modifies the steplength α_k to improve the reduction of the function values along the iterations. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms a variant of the hybrid conjugate gradient algorithm given by Andrei (Numer Algorithms 47:143–156, 2008), in which the pair (s_k, y_k) satisfies the classical secant condition B_{k+1}s_k = y_k, as well as some other conjugate gradient algorithms including Hestenes–Stiefel, Dai–Yuan, Polak–Ribière–Polyak, Liu–Storey, hybrid Dai–Yuan and Gilbert–Nocedal. A set of 75 unconstrained optimization problems with 10 different dimensions is used (Andrei, Adv Model Optim 10:147–161, 2008).
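The modified secant right-hand side z_k quoted above uses only quantities already available in a conjugate gradient iteration. A direct transcription of the displayed formulas (variable names are illustrative):

```python
import numpy as np

def modified_secant_rhs(x_old, x_new, f_old, f_new, g_old, g_new):
    """z_k = y_k + (eta_k / ||s_k||^2) * s_k with
    eta_k = 2*(f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k,
    s_k = x_{k+1} - x_k,  y_k = g_{k+1} - g_k.
    """
    s = x_new - x_old
    y = g_new - g_old
    eta = 2.0 * (f_old - f_new) + float((g_old + g_new) @ s)
    return y + (eta / float(s @ s)) * s
```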

12.
Based on the well-known PRP conjugate gradient method and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for solving large-scale unconstrained optimization problems. The method generates a sufficiently descent search direction at every iteration, independently of any line search condition. Under the standard Wolfe line search conditions, the global convergence and linear convergence rate of the modified PRP conjugate gradient method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.
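For reference, the two building blocks named in this abstract are the classical Polak–Ribière–Polyak parameter and the Hager–Zhang parameter underlying CG_DESCENT. The sketch below shows only these standard formulas, not the paper's modified combination of them:

```python
import numpy as np

def beta_prp(g_new, g_old):
    """Classical Polak-Ribiere-Polyak parameter."""
    y = g_new - g_old
    return float(g_new @ y) / float(g_old @ g_old)

def beta_hz(g_new, g_old, d_old):
    """Hager-Zhang parameter used in CG_DESCENT (shown without the
    lower-bound truncation of the full algorithm).  Assumes d_old^T y != 0,
    which the Wolfe conditions guarantee for a descent direction d_old.
    """
    y = g_new - g_old
    dty = float(d_old @ y)
    w = y - 2.0 * d_old * float(y @ y) / dty
    return float(w @ g_new) / dty
```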

14.
Conjugate gradient methods are important for large-scale unconstrained optimization. This paper proposes an acceleration of these methods based on a modification of the steplength. The idea is to modify, in a multiplicative manner, the steplength α_k computed by the Wolfe line search conditions, by means of a positive parameter η_k, in such a way as to improve the behavior of the classical conjugate gradient algorithms. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with some conjugate gradient algorithms, using a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that the accelerated computational scheme outperforms the corresponding conjugate gradient algorithms.
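One simple way to realize such a multiplicative modification of the steplength is to rescale the Wolfe step by the minimizer of a one-dimensional quadratic model built from two directional derivatives. The sketch below illustrates this idea under that assumption and is not claimed to reproduce the paper's exact acceleration formula:

```python
import numpy as np

def accelerated_step(x, d, alpha, grad, eta_min=1e-3, eta_max=1e3):
    """Rescale a Wolfe steplength alpha along direction d.

    A quadratic model of phi(t) = f(x + t*d) is built from phi'(0) and
    phi'(alpha); its minimizer gives the multiplicative factor eta.
    `grad` is a callable returning the gradient of f.
    """
    g0 = grad(x)
    g1 = grad(x + alpha * d)
    der0 = float(g0 @ d)                 # phi'(0), negative for a descent direction
    der1 = float(g1 @ d)                 # phi'(alpha)
    curv = der1 - der0                   # equals alpha times the secant curvature estimate
    if curv <= 0.0:                      # model not convex: keep the Wolfe step
        return x + alpha * d
    eta = -der0 / curv                   # minimizer of the quadratic model, in units of alpha
    eta = min(max(eta, eta_min), eta_max)
    return x + eta * alpha * d
```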

15.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have recently been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity for small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.


16.
A modified conjugate gradient method is presented for solving unconstrained optimization problems. It possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction lies in a trust region automatically; (iii) the Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method, namely the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, which prevents a sequence of tiny steps from occurring. The global convergence and the linear convergence rate of the method are established. Numerical results show that this method is promising.

17.
A new family of conjugate gradient methods is proposed by minimizing the distance between two certain directions. It is a subfamily of the Dai–Liao family that contains the Hager–Zhang family and the Dai–Kou method. The direction of the proposed method is an approximation to that of the memoryless Broyden–Fletcher–Goldfarb–Shanno method. With suitable intervals of the parameters, the direction of the proposed method possesses the sufficient descent property independently of the line search. Under mild assumptions, we analyze the global convergence of the method for strongly convex functions and for general functions, where the stepsize is obtained by the standard Wolfe rules. Numerical results indicate that the proposed method is promising, outperforming CGOPT and CG_DESCENT on a set of unconstrained optimization test problems.
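The Dai–Liao family referred to above is based on the conjugacy condition d_{k+1}^T y_k = −t g_{k+1}^T s_k with a nonnegative parameter t; its standard conjugate gradient parameter is shown below (the specific parameter intervals chosen in the paper are not reproduced):

```python
import numpy as np

def beta_dai_liao(g_new, d_old, s, y, t):
    """Standard Dai-Liao parameter with nonnegative parameter t:
    beta = (g_{k+1}^T y_k - t * g_{k+1}^T s_k) / (d_k^T y_k).
    Assumes d_old^T y > 0, as guaranteed by the Wolfe conditions.
    """
    dty = float(d_old @ y)
    return (float(g_new @ y) - t * float(g_new @ s)) / dty

# Direction of a Dai-Liao type method: d_new = -g_new + beta * d_old
```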

18.
This paper presents a nonmonotone supermemory gradient algorithm for unconstrained optimization problems. At each iteration, the proposed method makes full use of the iterative information from several previous steps and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for solving large-scale optimization problems and converges stably. Under some assumptions, the convergence properties of the proposed algorithm are analyzed. Numerical results are also reported to show the efficiency of the proposed method.

19.
Parallel preconditioned conjugate gradient algorithm on GPU
We propose a parallel implementation of the preconditioned conjugate gradient algorithm on a GPU platform. The preconditioning matrix is an approximate inverse derived from the SSOR preconditioner. Applied through sparse matrix–vector multiplication, the proposed preconditioner is well suited to the massively parallel GPU architecture. Compared to a CPU implementation of the conjugate gradient algorithm, our GPU preconditioned conjugate gradient implementation is up to 10 times faster (8 times faster at worst).
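The key point above is that applying an explicit approximate-inverse preconditioner reduces preconditioning to a sparse matrix–vector product, the operation GPUs handle well. The sketch below is a serial CPU illustration of that algorithmic structure using a simple Jacobi-type approximate inverse as a stand-in for the SSOR-based one; it is not the paper's GPU code:

```python
import numpy as np
import scipy.sparse as sp

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient where the preconditioner is an
    explicit sparse approximate inverse, so each application of the
    preconditioner is a single sparse matrix-vector product.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r                       # preconditioning = one SpMV
    p = z.copy()
    rz = float(r @ z)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / float(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv @ r                   # preconditioning = one SpMV
        rz_new = float(r @ z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example with a small SPD system; M_inv is a Jacobi-type approximate
# inverse used here only as a stand-in for the SSOR-based one.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
M_inv = sp.diags(1.0 / A.diagonal(), format="csr")
b = np.ones(n)
x = pcg(A, b, M_inv)
```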

20.
Zheng Xiuyun, Dong Xiaoliang, Shi Jiarong, Yang Wei 《Numerical Algorithms》2020,84(2):603-608
In Dai and Wen (Numer. Algor. 69, 337–341, 2015), some improvements have been presented in the proofs of Theorem 2 and Theorem 4 in Andrei (Numer. Algor. 47, ...).
