Similar Literature
19 similar documents found (search time: 109 ms)
1.
Here g_k = ∇f(x_k) and β_k is a parameter; different choices of β_k yield the various conjugate gradient methods. Among these, the Fletcher-Reeves method (FR for short) is one with relatively complete theory: for twice continuously differentiable functions with bounded level sets, Powell and Al-Baali proved its global convergence under exact and inexact line search, respectively. The Polak-Ribière method …

2.
A modified HS conjugate gradient method and its global convergence (Cited 2 times: 0 self-citations, 2 by others)
1 Introduction. Consider the unconstrained minimization problem min_{x∈R^n} f(x), (1), where f(x) is continuously differentiable and its gradient is denoted g(x). The standard conjugate gradient iteration for solving (1) is x_{k+1} = x_k + α_k d_k, (2), with d_k = -g_k + β_k d_{k-1} (d_0 = -g_0), (3), where g_k = ∇f(x_k), α_k ≥ 0 is a step length obtained by some line search, d_k is the search direction, and β_k is a scalar; different choices of β_k give rise to different conjugate gradient methods. Well-known formulas for β_k include:
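A minimal runnable sketch of the iteration (2)-(3), assuming the standard FR, PR, and HS formulas for β_k; for testability, the exact minimizing step on a quadratic f(x) = (1/2)xᵀAx - bᵀx stands in for a general line search. The function name `cg_quadratic` and all parameter choices are illustrative, not from any of the papers listed here.

```python
import numpy as np

def cg_quadratic(A, b, x0, beta_rule="FR", max_iter=50, tol=1e-10):
    """Conjugate gradient iteration x_{k+1} = x_k + alpha_k d_k,
    d_k = -g_k + beta_k d_{k-1}, on f(x) = 0.5 x^T A x - b^T x.
    The exact step alpha_k = -g_k^T d_k / (d_k^T A d_k) replaces a
    general (Wolfe/Armijo) line search, purely for illustration."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                 # gradient of the quadratic at x
    d = -g                        # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        y = g_new - g
        if beta_rule == "FR":              # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        elif beta_rule == "PR":            # Polak-Ribiere
            beta = (g_new @ y) / (g @ g)
        else:                              # Hestenes-Stiefel
            beta = (g_new @ y) / (d @ y)
        d = -g_new + beta * d
        g = g_new
    return x
```

On a strictly convex quadratic with exact line search, all three β_k formulas coincide and the iteration terminates in at most n steps.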

3.
A restricted PR conjugate gradient method and its global convergence (Cited 5 times: 0 self-citations, 5 by others)
时贞军 (Shi Zhenjun), 《数学进展》 (Advances in Mathematics), 2002, 31(1): 47-55
The PR conjugate gradient method is one of the effective algorithms for large-scale unconstrained optimization, but its global convergence had remained an open theoretical question. This paper restricts the parameter β in the PR method and proposes the resulting restricted PR conjugate gradient method, proving its global convergence under the Armijo line search. Numerical experiments show that the algorithm is very effective.

4.
王开荣, 吴伟霞 (Wang Kairong, Wu Weixia), 《经济数学》 (Mathematics in Economics), 2007, 24(4): 431-436
The conjugate gradient method is an effective method for unconstrained optimization. Building on β_k^DY, this paper introduces a parameter into β_k, proposes a new class of conjugate gradient methods, and proves that they possess sufficient descent and global convergence under the strong Wolfe line search.

5.
This paper proposes a new hybrid conjugate gradient algorithm for unconstrained optimization. By constructing a new β_k formula and, from it, a way of determining the search direction that differs from the traditional one, the new algorithm satisfies the descent condition automatically, and this property is independent of both the line search and the convexity of the objective function. Global convergence is proved under fairly weak conditions, and numerical results demonstrate the algorithm's effectiveness.

6.
Designing conjugate gradient methods by solving an optimization problem with a penalty parameter is a new idea. Based on Fatemi's optimization problem, a spectral three-term conjugate gradient method is constructed by estimating the step length and choosing a suitable penalty parameter; the spectral parameter is then modified so that global convergence can be established. Sufficient descent and global convergence of the method are proved under the standard Wolfe line search. Finally, tests against several algorithms on the same set of examples show that the new method performs well numerically.

7.
For unconstrained optimization, two modified WYL conjugate gradient methods are proposed by modifying the conjugate gradient parameter and constructing new search directions. The search directions generated by both algorithms satisfy sufficient descent at every iteration. Global convergence is proved under suitable conditions, and numerical results show the algorithms are feasible and effective.

8.
This paper presents a generalized gradient projection algorithm with a conjugate gradient parameter for nonlinear optimization with inequality constraints. The conjugate gradient parameter is easy to obtain, and the initial point can be chosen arbitrarily. Moreover, since the algorithm uses only the information of the previous search direction, the computational cost is reduced. Global convergence is obtained under fairly weak conditions, and numerical results show the algorithm is effective.

9.
The improved Wolfe line search algorithm of [3] can save a certain amount of computation when evaluating the gradient of the objective function is expensive. This paper applies that improved Wolfe line search to the FR conjugate gradient method and proves that, for parameter σ ≤ 1/2, the resulting algorithm has the same theoretical properties as the FR method under the standard Wolfe line search. Numerical experiments show the algorithm is feasible and effective.
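The weak Wolfe conditions referred to throughout these abstracts can be checked directly for a given step length. A small sketch follows; the helper name `satisfies_wolfe` and the default parameter values are illustrative, with the usual requirement 0 < c1 < σ < 1.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, sigma=0.4):
    """Check the weak Wolfe conditions for step alpha along direction d:
      sufficient decrease: f(x + a d) <= f(x) + c1 * a * g(x)^T d
      curvature:           g(x + a d)^T d >= sigma * g(x)^T d
    d is assumed to be a descent direction, i.e. g(x)^T d < 0."""
    g0d = float(grad(x) @ d)          # directional derivative at x
    x_new = x + alpha * d
    armijo = f(x_new) <= f(x) + c1 * alpha * g0d
    curvature = float(grad(x_new) @ d) >= sigma * g0d
    return armijo and curvature
```

For f(x) = ||x||² from x = 1 along d = -1, the full step α = 1 satisfies both conditions, while α = 0.5 violates the curvature condition (the slope there is still too negative).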

10.
Conditions on the parameter β_k for convergent conjugate gradient methods (Cited 2 times: 0 self-citations, 2 by others)
This paper gives two conditions, Condition I and Condition II, that determine the range of the parameter β_k in conjugate gradient methods; each guarantees global convergence of the method. Convergence theorems for conjugate gradient algorithms are established under Condition I together with Property (*) introduced by Gilbert & Nocedal (1992), and under Condition II together with the Wolfe conditions, respectively.

11.
The conjugate gradient method is one of the most effective methods for large-scale unconstrained optimization. A new formula is obtained by improving the parameter formula of the HS conjugate gradient method, and an algorithmic framework is built on it. The directions generated by the framework are shown to satisfy the sufficient descent condition independently of any line search, and global convergence is proved under the standard Wolfe line search. Finally, numerical tests show the improved method is effective.

12.
孙清滢 (Sun Qingying), 《数学季刊》 (Chinese Quarterly Journal of Mathematics), 2003, 18(2): 154-162
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameters in the search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented, and global convergence of the new method under two common kinds of line search is proved. First, it is shown that, using the reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step-size rule and a bounded level set. Second, using a comparison technique, some general convergence properties of the new method under other step-size rules are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.

13.
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization (Cited 22 times: 0 self-citations, 22 by others)
Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new method. Specifically, if the size of the scalar β_k relative to the one in the new method belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes-Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.

14.
Jiang  Xianzhen  Liao  Wei  Yin  Jianghua  Jian  Jinbao 《Numerical Algorithms》2022,91(1):161-191

In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods is proposed for solving unconstrained optimization. The conjugate parameter in the search direction is a hybrid of the Dai-Yuan conjugate parameter and an arbitrary one. The search direction is then the sum of the negative gradient direction and a convex combination of the last search direction and the gradient at the previous iteration. Without choosing any specific conjugate parameters, we show that the search directions generated by the family always possess the descent property independently of the line search technique, and that the methods are globally convergent under the usual assumptions and the weak Wolfe line search. To verify the effectiveness of the presented family, we further design a specific conjugate parameter and perform medium- and large-scale numerical experiments on smooth unconstrained optimization and image restoration problems. The numerical results show encouraging efficiency and applicability of the proposed methods, even in comparison with state-of-the-art methods.

15.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao conjugacy condition and, under a suitable condition, coincides with the Newton direction; moreover, this property does not depend on any line search. We further prove that, modulo the value of the parameter t, the Newton-direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used, and global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
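A sketch of the convex-combination parameter described above, using the standard Liu-Storey and Fletcher-Reeves formulas for β_k. The function name `beta_hybrid` is illustrative, and the mixing parameter t is taken as a free input here, whereas the paper determines t from its own conditions.

```python
import numpy as np

def beta_hybrid(g_new, g, d, t):
    """Convex combination beta = (1 - t) * beta_LS + t * beta_FR, t in [0, 1],
    where  beta_LS = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k)   (Liu-Storey)
    and    beta_FR = ||g_{k+1}||^2 / ||g_k||^2                  (Fletcher-Reeves)."""
    y = g_new - g                              # gradient difference y_k
    beta_ls = (g_new @ y) / (-(d @ g))         # Liu-Storey parameter
    beta_fr = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves parameter
    return (1.0 - t) * beta_ls + t * beta_fr
```

Setting t = 0 or t = 1 recovers the pure LS or pure FR parameter, respectively, which is a quick sanity check on any such hybrid.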

16.
《Optimization》2012,61(4):993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and have been much studied recently. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. With the Wolfe line search, the two-parameter family is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. Numerical results show that the methods are efficient on the given test problems. In addition, methods related to this family are discussed in a unified way.

17.
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameter in the search directions. In this note, conditions on the parameter in the conjugate gradient directions are given that ensure the descent property of the search directions, and global convergence of such a class of methods is discussed. It is shown that, using the reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with a modification of Curry-Altman's step-size rule and a bounded level set. Combining the PR method with the new method, the PR method is modified so as to have the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.

18.
A class of globally convergent conjugate gradient methods (Cited 4 times: 0 self-citations, 4 by others)
Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large-scale ones. However, unlike quasi-Newton methods, conjugate gradient methods have usually been analyzed individually. In this paper, we propose a class of conjugate gradient methods that can be regarded as a kind of convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class, we introduce unified tools concerning a general method whose scalar β_k has the form φ_k/φ_{k-1}. Consequently, the class of conjugate gradient methods can be analyzed in a unified way.

19.
A class of hybrid conjugate gradient algorithms for unconstrained optimization is proposed. The new algorithm organically combines the advantages of the DY and HS algorithms, and its global convergence is proved under fairly weak conditions using a nonmonotone line search technique. Numerical experiments show that the new algorithm has good computational performance.
