Similar Documents
20 similar documents found (search time: 234 ms)
1.
Global Convergence of the Hager-Zhang Conjugate Gradient Method under Armijo Line Search   (Cited by 2: self 0, other 2)
Hager and Zhang [4] proposed a new nonlinear conjugate gradient method (the HZ method for short) and proved its global convergence for strongly convex problems under the Wolfe and Goldstein line searches. Whether the HZ method converges globally for nonconvex problems under the standard Armijo line search, however, was unknown. This paper presents a conservative HZ conjugate gradient method and proves its global convergence for nonconvex optimization problems under the Armijo line search. Numerical results are also given to test the effectiveness of the method.
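For context, here is a minimal sketch of the two ingredients named above: the original Hager-Zhang parameter and a backtracking Armijo search. The function names and the constants sigma and rho are illustrative assumptions, not the paper's conservative variant; vectors are NumPy 1-D arrays.

    def hz_beta(g_new, g_old, d):
        # Hager-Zhang parameter: (y - 2 d ||y||^2 / d^T y)^T g_new / (d^T y)
        y = g_new - g_old
        dy = d @ y
        return (y - 2.0 * d * (y @ y) / dy) @ g_new / dy

    def armijo_step(f, x, d, g, sigma=1e-4, rho=0.5, alpha=1.0):
        # Backtracking Armijo rule: accept when f(x + a d) <= f(x) + sigma * a * g^T d
        fx, gd = f(x), g @ d
        while f(x + alpha * d) > fx + sigma * alpha * gd:
            alpha *= rho
        return alpha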

2.
The conjugate gradient method is one of the classical methods for solving large-scale unconstrained optimization problems. Based on the spectral condition number of the search direction matrix, an adaptive form of the parameter in the Dai-Liao (DL) conjugate gradient method is given, and an adaptive DL conjugate gradient algorithm is proposed. Under suitable conditions, global convergence of the method is proved for uniformly convex objective functions. Numerical results show that the proposed method is feasible.
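For reference, the Dai-Liao method uses the parameter sketched below, where s_k = x_{k+1} - x_k and t > 0; the paper's adaptive, spectral-condition-number-based choice of t is not reproduced here. Vectors are NumPy arrays.

    def dl_beta(g_new, g_old, d, s, t):
        # Dai-Liao parameter: beta = g_{k+1}^T (y_k - t s_k) / (d_k^T y_k)
        y = g_new - g_old
        return g_new @ (y - t * s) / (d @ y)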

3.
An Improved Conjugate Gradient Method and Its Global Convergence   (Cited by 1: self 0, other 1)
Building on the DY conjugate gradient method, this paper proposes an improved conjugate gradient method for unconstrained optimization. The method guarantees sufficient descent under the Wolfe line search, and global convergence of the algorithm is proved under the assumption that the objective function is differentiable. Extensive numerical experiments show that the method is effective.

4.
The conjugate gradient method is an important method for unconstrained optimization, particularly well suited to large-scale problems. This paper proposes a new family of conjugate gradient methods that includes the FR and CD methods, and proves its global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments are reported; the results verify the effectiveness of the algorithm.
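For reference, the classical parameters that such families interpolate between can be summarized as below; this is a textbook list (FR, CD, DY, PRP, HS), not the specific family proposed in the paper.

    def classical_betas(g_new, g_old, d):
        # Textbook conjugate parameters; y = g_{k+1} - g_k, inputs are NumPy vectors
        y = g_new - g_old
        return {
            "FR":  (g_new @ g_new) / (g_old @ g_old),
            "CD":  (g_new @ g_new) / -(d @ g_old),
            "DY":  (g_new @ g_new) / (d @ y),
            "PRP": (g_new @ y) / (g_old @ g_old),
            "HS":  (g_new @ y) / (d @ y),
        }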

5.
Building on an existing conjugate parameter β from the literature, a new spectral conjugate parameter is obtained, and a new spectral conjugate gradient method is constructed from it. The search direction of the new method is automatically a descent direction without requiring any line search condition. Under the standard Wolfe line search and general assumptions, the method is verified to be globally convergent.

6.
The conjugate gradient method is an important method for solving unconstrained optimization problems. This paper proposes a new family of conjugate gradient methods and proves its global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments are reported; the results verify the effectiveness of the algorithm.

7.
The conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient method and proves its global convergence under a generalized Wolfe line search. Finally, numerical experiments are reported; the results show that the algorithm converges well and is effective.

8.
黄海 《经济数学》2011,28(2):25-28
Building on a modified PRP conjugate gradient method, this paper proposes a sufficient-descent conjugate gradient algorithm for unconstrained optimization, proves that the algorithm converges globally under the Wolfe line search, and shows through numerical experiments that it performs well.

9.
For the parameter in the line-search direction of the super-memory gradient algorithm for unconstrained programming, an assumption is given that determines a new range of values for the parameter and guarantees that the search direction is a sufficient descent direction of the objective function; a new class of memory gradient algorithms is thereby proposed. Global convergence of the algorithm is discussed under the Armijo step-size search, without assuming boundedness of the iterate sequence, and modified forms of the memory gradient method combining the FR, PR, and HS conjugate gradient parameters are given. Numerical experiments show that the new algorithms are more stable and more effective than the FR, PR, and HS conjugate gradient methods and the super-memory gradient method under the Armijo line search.

10.
王开荣  吴伟霞 《经济数学》2007,24(4):431-436
The conjugate gradient method is an effective method for unconstrained optimization. Starting from β_k^DY, this paper introduces a parameter into β_k and proposes a new class of conjugate gradient methods, which are shown to possess the sufficient descent property and global convergence under the strong Wolfe line search.
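For reference, the strong Wolfe conditions mentioned in several of these abstracts can be checked as below; the constants c1 and c2 are common illustrative defaults, not values from the paper.

    def strong_wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
        # Sufficient decrease plus the two-sided (strong) curvature bound
        gd = grad(x) @ d
        decrease = f(x + alpha * d) <= f(x) + c1 * alpha * gd
        curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(gd)
        return decrease and curvature

Replacing the curvature test with grad(x + alpha * d) @ d >= c2 * gd gives the weak Wolfe conditions.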

11.
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameter in the search directions. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions. Global convergence of such a class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization works for a continuously differentiable function with a modification of the Curry-Altman step-size rule and a bounded level set. Combining the PR method with the new method, the PR method is modified to have the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.
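A minimal sketch of the descent safeguard that such parameter conditions are designed to guarantee: if the candidate direction is not downhill, restart with steepest descent. This is a generic illustration, not the note's specific condition on the parameter.

    def next_direction(g_new, d_old, beta):
        # Candidate CG direction; fall back to -g if the descent test fails
        d = -g_new + beta * d_old
        if g_new @ d >= 0.0:
            d = -g_new
        return d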

12.
《Optimization》2012,61(4):993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and have recently been studied extensively. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. With the Wolfe line search, the two-parameter family is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. Numerical results show that the methods are efficient for the given test problems. In addition, methods related to this family are discussed in a unified way.

13.
孙清滢 《数学季刊》2003,18(2):154-162
Conjugate gradient optimization algorithms depend on the search directions, with different choices for the parameters in the search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. First, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization works for a continuously differentiable function with the Curry-Altman step-size rule and a bounded level set. Second, using a comparison technique, some general convergence properties of the new method with another kind of step-size rule are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.

14.
孙清滢 《计算数学》2004,26(4):401-412
Using a generalized projection matrix, this paper gives a new range of values for the parameter in the super-memory gradient algorithm for unconstrained programming, so as to obtain a super-memory gradient generalized projection descent direction of the objective function. Combined with a technique for handling arbitrary initial points, a super-memory gradient generalized projection algorithm with an arbitrary initial point is constructed for nonlinear optimization problems with inequality constraints, and its convergence is proved under rather weak conditions. Super-memory gradient generalized projection algorithms incorporating the FR, PR, and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient methods to constrained programming problems. Numerical examples show that the algorithm is effective.

15.
Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. In 2001, Dai and Liao proposed a new conjugacy condition, on the basis of which two nonlinear conjugate gradient methods were constructed. Using a trust-region idea, this paper gives a self-adaptive technique for the two methods. Numerical results show that this technique works well for the given nonlinear optimization test problems.

16.
A new family of conjugate gradient methods   (Cited by 1: self 0, other 1)
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, the Polak–Ribière–Polyak and Liu–Storey conjugate gradient methods are special cases of the new class. By estimating the local Lipschitz constant of the derivative of the objective function, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
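A minimal sketch of a nonmonotone Armijo-type acceptance test of the kind the abstract describes (in the style of Grippo, Lampariello, and Lucidi); the memory f_hist and the constants are illustrative assumptions, not the paper's exact rule.

    def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, rho=0.5, alpha=1.0):
        # Accept a step that lies below the max of the last few f-values
        f_ref, gd = max(f_hist), g @ d
        while f(x + alpha * d) > f_ref + sigma * alpha * gd:
            alpha *= rho
        return alpha

Here f_hist holds the most recent function values (say the last 10); replacing max(f_hist) by f(x) recovers the monotone Armijo rule.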

17.
Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which considers an inexact line search scheme but reduces to the old one if the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed. Convergence analysis for the two methods is provided. Our numerical results show that one of the methods is very efficient for the given test problems. Accepted 15 September 2000. Online publication 8 December 2000.  相似文献   

18.
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization   (Cited by 22: self 0, other 22)
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the scalar β_k, measured relative to the one in the new method, belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
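As an illustration of this kind of hybrid, a well-known variant truncates the Hestenes-Stiefel parameter into the interval [0, beta_DY]; the sketch below shows that rule, offered as a plausible reading rather than the paper's exact formula.

    def hybrid_beta(g_new, g_old, d):
        # Truncate beta_HS into [0, beta_DY]
        y = g_new - g_old
        dy = d @ y
        beta_hs = (g_new @ y) / dy
        beta_dy = (g_new @ g_new) / dy
        return max(0.0, min(beta_hs, beta_dy))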

19.
Conjugate gradient methods are appealing for large-scale nonlinear optimization problems. Recently, expecting fast convergence, Dai and Liao (2001) used the secant condition of quasi-Newton methods. In this paper, we make use of the modified secant condition given by Zhang et al. (1999) and Zhang and Xu (2001) and propose a new conjugate gradient method following Dai and Liao (2001). A new feature of this method is that it uses both the available gradient and function-value information and achieves high-order accuracy in approximating the second-order curvature of the objective function. The method is shown to be globally convergent under some assumptions. Numerical results are reported.
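For reference, the modified secant condition of Zhang et al. replaces y_k by a corrected vector that also uses function values. The sketch below takes u = s_k as the correction direction, one common choice; treat it as an assumption rather than the paper's exact setting.

    def modified_y(g_new, g_old, s, f_new, f_old):
        # z = y + (theta / s^T u) u with u = s, theta built from function values
        y = g_new - g_old
        theta = 6.0 * (f_old - f_new) + 3.0 * ((g_old + g_new) @ s)
        return y + (theta / (s @ s)) * s

Substituting z for y in the Dai-Liao parameter then yields a method of the kind the abstract describes.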

20.
Minimizing the distance, in the Frobenius norm, between the search direction matrix of the Dai–Liao method and the scaled memoryless BFGS update, and using Powell's nonnegative restriction of the conjugate gradient parameters, a one-parameter class of nonlinear conjugate gradient methods is proposed. A brief global convergence analysis is then given, both with and without a convexity assumption on the objective function. Preliminary numerical results are reported; they demonstrate that a proper choice of the parameter of the proposed class may lead to promising numerical performance.
