Found 19 similar documents; search took 281 ms.
3.
The Polak–Ribière–Polyak (PRP) method is one of the classical conjugate gradient methods with the best numerical performance. The PRP formula is modified in combination with the Wolfe inexact line search rule, producing a new conjugate parameter; a new spectral parameter is then designed from it, a restart condition is introduced, and a new restart direction is constructed, yielding a spectral conjugate gradient algorithm with a restart step. Under standard assumptions and the strong Wolfe inexact line search step-size rule, ...
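The abstract above does not give its modified formula, but the classical PRP scheme it starts from can be sketched as follows. This is a minimal illustration of PRP+ (PRP truncated at zero) with a descent-restart safeguard and a simple Armijo backtracking step standing in for the Wolfe search; it is not the algorithm of the paper.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Classical PRP+ conjugate gradient sketch with Armijo backtracking."""
    x = x0.astype(float)
    g = grad(x)
    d = -g  # steepest descent start
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking (a stand-in for the Wolfe search above)
        alpha, c1 = 1.0, 1e-4
        fx, slope = f(x), g @ d
        while f(x + alpha * d) > fx + c1 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP parameter truncated at zero (PRP+); beta = 0 acts as a restart
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:  # restart if the direction is not descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic this sketch recovers the minimizer to line-search accuracy.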
4.
The conjugate gradient method is one of the most effective methods for large-scale unconstrained optimization. The parameter formula of the HS conjugate gradient method is modified to obtain a new formula, on which an algorithmic framework is built. Without relying on any line search condition, the iteration directions generated by the framework are shown to satisfy the sufficient descent condition, and global convergence is proved under the standard Wolfe line search conditions. Finally, numerical tests show that the modified method is effective.
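For reference, the classical conjugate parameters named throughout these abstracts are, in standard notation (with g_k = ∇f(x_k), y_{k-1} = g_k − g_{k-1}, and previous search direction d_{k-1}):

```latex
\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad
\beta_k^{PRP} = \frac{g_k^{\top} y_{k-1}}{\|g_{k-1}\|^2}, \qquad
\beta_k^{HS} = \frac{g_k^{\top} y_{k-1}}{d_{k-1}^{\top} y_{k-1}}, \qquad
\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{\top} y_{k-1}}.
```

Each entry below modifies, parametrizes, or combines one or more of these formulas.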
5.
A class of hybrid conjugate gradient algorithms for unconstrained optimization is proposed. The new algorithm combines the advantages of the DY and HS algorithms, and its global convergence is proved under fairly weak conditions using a nonmonotone line search technique. Numerical experiments show that the new algorithm performs well.
6.
A hybrid HS-DY conjugate gradient method  Cited by: 22 (self-citations: 3, by others: 19)
Building on the HS and DY methods and combining their strengths, this paper proposes a new hybrid conjugate gradient method for unconstrained optimization. Under the Wolfe line search, global convergence is proved without imposing a descent condition. Numerical experiments show the new algorithm to be more effective than the HS and PR methods.
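The abstract above does not state its exact combination rule, but one widely used HS-DY hybridization (due to Dai and Yuan) clips the HS parameter into the interval [0, β^DY]; the sketch below is only representative of that family, not this paper's method.

```python
import numpy as np

def hybrid_hs_dy_beta(g_new, g, d):
    """Representative HS-DY hybrid: clip beta_HS into [0, beta_DY]."""
    y = g_new - g                        # gradient difference y_{k-1}
    denom = d @ y                        # shared denominator d^T y
    beta_hs = (g_new @ y) / denom        # Hestenes-Stiefel parameter
    beta_dy = (g_new @ g_new) / denom    # Dai-Yuan parameter
    return max(0.0, min(beta_hs, beta_dy))
```

When the HS parameter is negative, the hybrid returns 0, which restarts the direction at steepest descent.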
8.
For the parameter in the line-search direction of the super-memory gradient algorithm for unconstrained optimization, an assumption is given that determines a new range of values for it and guarantees that the search direction is a sufficient descent direction for the objective function; a new class of memory gradient algorithms is then proposed. Global convergence is discussed under the Armijo step-size search, without assuming boundedness of the iterate sequence, and modified forms incorporating the FR, PR, and HS conjugate gradient formulas are given. Numerical experiments show the new algorithms to be more stable and effective than the FR, PR, and HS conjugate gradient methods and the super-memory gradient method under the Armijo line search.
10.
An improved conjugate gradient method and its global convergence  Cited by: 1 (self-citations: 0, by others: 1)
Building on the DY conjugate gradient method, this paper proposes an improved conjugate gradient method for unconstrained optimization. The method guarantees sufficient descent under the Wolfe line search, and global convergence is proved under the assumption that the objective function is differentiable. Extensive numerical experiments show that the method is effective.
11.
In this paper, a new steplength formula is proposed for unconstrained optimization, which determines the step-size in a single step and avoids the line search. Global convergence of five well-known conjugate gradient methods with this formula is analyzed, with the following results: (1) the DY method converges globally for a strongly convex LC^1 objective function; (2) the CD, FR, PRP, and LS methods converge globally for a general, not necessarily convex, LC^1 objective function.
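The abstract does not reproduce its steplength formula. As a hedged illustration of the same idea, a step-size computed in closed form with no line search, here is the classical Barzilai-Borwein steplength, which is a different, well-known formula and not the one proposed in the paper.

```python
import numpy as np

def bb_step(x, x_prev, g, g_prev):
    """Barzilai-Borwein steplength: alpha = (s^T s) / (s^T y),
    where s is the iterate difference and y the gradient difference.
    A line-search-free step-size; illustrative only."""
    s = x - x_prev   # iterate difference s_{k-1}
    y = g - g_prev   # gradient difference y_{k-1}
    return (s @ s) / (s @ y)
```

For a quadratic with Hessian A, s^T y = s^T A s, so the step is a Rayleigh-quotient estimate of an inverse eigenvalue of A.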
12.
Neculai Andrei 《Computational Optimization and Applications》2007,38(3):401-416
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG by Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified so as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm.
The author was awarded the Romanian Academy Grant 168/2003.
13.
The conjugate gradient method is an effective method for unconstrained optimization. Building on the DY parameter β_k^{DY}, this paper introduces a parameter into β_k, yielding a new class of conjugate gradient methods, and proves that they possess the sufficient descent property and global convergence under the strong Wolfe line search.
14.
This paper establishes a criterion for the global convergence of conjugate gradient methods, and uses it to prove the global convergence of a class of three-parameter conjugate gradient methods and of a variant of the DY method.
15.
Ioannis E. Livieris, Vassilis Tampakas & Panagiotis Pintelas 《Numerical Algorithms》2018,79(4):1169-1185
In this work, we present a new hybrid conjugate gradient method based on a convex hybridization of the conjugate gradient update parameters of DY and HS+, adapting a quasi-Newton philosophy. The hybridization parameter is computed by minimizing the distance between the hybrid conjugate gradient direction and the self-scaling memoryless BFGS direction. Furthermore, a significant property of our proposed method is that it ensures sufficient descent independent of the accuracy of the line search. Global convergence of the proposed method is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection indicate that our proposed method is preferable and in general superior to classic conjugate gradient methods in terms of efficiency and robustness.
16.
Sindhu Narayanan & P. Kaelo 《高等学校计算数学学报(英文版)》2021,14(2):527-539
Conjugate gradient methods are iterative methods well suited to solving large-scale unconstrained optimization problems, and much recent research has focused on developing more effective variants. In this paper, we propose another hybrid conjugate gradient method, a linear combination of the Dai-Yuan (DY) and Hestenes-Stiefel (HS) methods. The sufficient descent condition and the global convergence of this method are established using the generalized Wolfe line search conditions. Compared with other conjugate gradient methods, the proposed method gives good numerical results and is effective.
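Several entries in this listing combine the DY and HS parameters as a convex combination. A generic sketch of that idea follows; the weight theta is left as a free input here, whereas the papers above choose it adaptively (e.g., entry 15 picks it by minimizing the distance to a memoryless BFGS direction).

```python
import numpy as np

def convex_hybrid_beta(g_new, g, d, theta):
    """Generic convex combination of the HS and DY parameters:
    beta = (1 - theta) * beta_HS + theta * beta_DY,  0 <= theta <= 1.
    Illustrative only; the weight selection rules differ per paper."""
    y = g_new - g                        # gradient difference
    denom = d @ y                        # shared denominator d^T y
    beta_hs = (g_new @ y) / denom        # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / denom    # Dai-Yuan
    return (1.0 - theta) * beta_hs + theta * beta_dy
```

With theta = 0 this reduces to pure HS, and with theta = 1 to pure DY.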
17.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry's (1977, A class of conjugate gradient algorithms with a two-step variable metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN of Shanno and Phua (1976, Algorithm 500: Minimization of unconstrained multivariate functions. ACM Transactions on Mathematical Software, 2, 87–94).
19.
Snežana S. Djordjević 《数学物理学报(B辑英文版)》2019,(1)
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao (D-L) conjugacy condition and at the same time agrees with the Newton direction under a suitable condition; furthermore, this property does not depend on any line search. We then prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. Global convergence of the new method is proved, and numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.