Similar Articles
19 similar articles found (search time: 109 ms)
1.
The HZ method, a new conjugate gradient method proposed by William W. Hager and Hongchao Zhang, has been shown to be effective. This paper proves the global convergence of the HZ conjugate gradient method under an Armijo-type line search. Numerical experiments show that the HZ method is more efficient with the Armijo-type line search than with the Wolfe line search.
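Several of the abstracts in this list rely on the Armijo-type line search as the globalization tool. As a minimal illustrative sketch (not the HZ paper's exact rule, whose constants and acceptance test may differ), a backtracking Armijo search looks like this:

```python
import numpy as np

def armijo_line_search(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4, max_iter=50):
    """Backtracking Armijo search: shrink alpha until the sufficient-decrease
    condition f(x + alpha*d) <= f(x) + c*alpha*grad(x)^T d holds."""
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; negative for a descent direction
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            return alpha
        alpha *= rho             # backtrack
    return alpha                 # fallback after max_iter halvings
```

For example, minimizing f(x) = ||x||² from x = (1, 1) along the steepest-descent direction, the search backtracks once and accepts alpha = 0.5, which lands exactly on the minimizer.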

2.
A Restricted PR Conjugate Gradient Method and Its Global Convergence
时贞军. 《数学进展》, 2002, 31(1): 47-55
The PR conjugate gradient method is one of the effective algorithms for large-scale unconstrained optimization, but its global convergence has long remained an open theoretical question. This paper restricts the parameter β in the PR method, proposes a restricted PR conjugate gradient method, and proves its global convergence under the Armijo line search. Numerical experiments show that the algorithm is quite effective.

3.
The conjugate gradient method is an important method for unconstrained optimization. This paper proposes a new family of conjugate gradient methods and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments verify the effectiveness of the algorithm.

4.
The conjugate gradient method is one of the classical methods for large-scale unconstrained optimization. Based on the spectral condition number of the search-direction matrix, an adaptive form of the parameter in the Dai-Liao (DL) conjugate gradient method is given, yielding an adaptive DL conjugate gradient algorithm. Under suitable conditions, global convergence is proved for uniformly convex objective functions. Numerical results show that the proposed method is feasible.
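The DL family referenced here uses the standard Dai-Liao coefficient; the adaptive choice of the parameter t from a spectral condition number described in the abstract is not reproduced below. A hedged sketch of one DL direction update with a fixed t (an assumption for illustration):

```python
import numpy as np

def dai_liao_direction(g_new, g_old, d_old, s_old, t=0.1):
    """One Dai-Liao (DL) search-direction update:
    beta = (g_{k+1}^T y_k - t * g_{k+1}^T s_k) / (d_k^T y_k),
    d_{k+1} = -g_{k+1} + beta * d_k,
    where y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k.
    Here t is held fixed; the cited method chooses it adaptively."""
    y = g_new - g_old
    denom = d_old @ y
    if abs(denom) < 1e-12:          # safeguard: restart with steepest descent
        return -g_new
    beta = (g_new @ y - t * (g_new @ s_old)) / denom
    return -g_new + beta * d_old
```

With t = 0 this reduces to the Hestenes-Stiefel coefficient, which is why DL is often viewed as an HS method with an extra term penalizing loss of conjugacy.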

5.
连淑君, 王长钰. 《应用数学》, 2007, 20(1): 120-127
This paper discusses a family of conjugate gradient methods that can be viewed as convex combinations of the FR and DY methods. We propose two Armijo-type line searches and discuss the global convergence of the family under both.

6.
The conjugate gradient method is an important method for unconstrained optimization, particularly suited to large-scale problems. This paper proposes a new family of conjugate gradient methods containing the FR and CD methods, and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments verify the effectiveness of the algorithm.

7.
Global Convergence of a New Family of Conjugate Gradient Methods
The conjugate gradient method is an important method for unconstrained optimization, particularly suited to large-scale problems. This paper proposes a new family of conjugate gradient methods and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments verify the effectiveness of the algorithm.

8.
Global Convergence of a Family of New Conjugate Gradient Methods
杜学武, 徐成贤. 《数学研究》, 1999, 32(3): 277-280
A new family of conjugate gradient methods for unconstrained optimization is proposed, and the descent property and global convergence of one of its subfamilies under an inexact line search are proved.

9.
邓松海, 万中. 《计算数学》, 2012, 34(3): 297-308
A new DL-type conjugate gradient method for unconstrained optimization is proposed. Unlike existing methods, it constructs a modified Armijo line search rule that not only yields the step length of the current iteration but also simultaneously determines the conjugate parameter used in computing the next search direction. Global convergence is established under fairly weak conditions. Numerical experiments show that the new algorithm is more efficient than comparable methods.

10.
Building on several common Armijo-type line searches, this paper proposes a new Armijo-type line search condition and proves the global convergence of the hybrid conjugate gradient method proposed by Du et al. Numerical experiments show that the new method is effective on the given test functions.

11.
In this paper, we propose a new nonmonotone Armijo-type line search and prove that the MBFGS method proposed by Li and Fukushima converges globally with this new line search for nonconvex minimization. Some numerical experiments show that this nonmonotone MBFGS method is efficient for the given test problems.

12.
It is well-known that the HS method and the PRP method may not converge for nonconvex optimization even with exact line search. Some globalization techniques have been proposed, for instance, the PRP+ globalization technique and the Grippo-Lucidi globalization technique for the PRP method. In this paper, we propose a new efficient globalization technique for general nonlinear conjugate gradient methods for nonconvex minimization. This new technique makes full use of the information in the previous search direction. Under suitable conditions, we prove that nonlinear conjugate gradient methods with this new technique are globally convergent for nonconvex minimization if the line search satisfies the Wolfe conditions or the Armijo condition. Extensive numerical experiments are reported to show the efficiency of the proposed technique.
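The PRP+ globalization mentioned above has a particularly simple form: truncate the PRP coefficient at zero, so the method falls back toward steepest descent whenever the raw coefficient would be negative. A minimal sketch:

```python
import numpy as np

def prp_plus_beta(g_new, g_old):
    """PRP+ coefficient: beta = max(g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2, 0).
    Truncation at zero is the globalization device that prevents the
    non-convergence behavior of plain PRP on nonconvex problems."""
    beta_prp = g_new @ (g_new - g_old) / (g_old @ g_old)
    return max(beta_prp, 0.0)
```

When the gradient norm shrinks between iterations in the right way, beta goes negative and PRP+ returns 0, which resets the search direction to the negative gradient.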

13.
To guarantee global convergence of the standard (unmodified) PRP nonlinear conjugate gradient method for unconstrained optimization, the exact line search or some Armijo-type line searches which force the PRP method to generate descent directions have been adopted. In this short note, we propose a non-descent PRP method in another way. We prove that the unmodified PRP method converges globally even for nonconvex minimization by the use of an approximate descent inexact line search.

14.
In this paper, based on a new class of conjugate gradient methods proposed by Rivaie, Dai, Omer, et al., we propose a class of improved conjugate gradient methods for nonconvex unconstrained optimization. Unlike the above methods, our methods possess the following properties: (i) the search direction always satisfies the sufficient descent condition independently of any line search; (ii) the methods are globally convergent with the standard Wolfe line search or standard Armijo line search without any convexity assumption. Our numerical results also demonstrate the efficiency of the proposed methods.

15.
On the Nonmonotone Line Search
The technique of nonmonotone line search has received many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along different lines. The analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between the nonmonotone line search and R-linear convergence, assuming that the objective function is uniformly convex. In addition, by taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest a standard Armijo line search when the nonmonotone line search condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
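The nonmonotone acceptance test discussed here differs from the classical Armijo rule only in its reference value: instead of comparing against the current f(x_k), it compares against the maximum of the last few objective values. A hedged sketch of the Grippo-Lampariello-Lucidi style condition (constants and history length are illustrative assumptions):

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, f_hist, delta=1e-4, rho=0.5,
                       alpha0=1.0, max_iter=50):
    """Accept alpha once
        f(x + alpha*d) <= max(f_hist) + delta*alpha*grad(x)^T d,
    where f_hist holds the last few objective values (most recent last).
    Using max(f_hist) instead of f(x) lets the method accept steps that
    temporarily increase f, which can avoid tiny steps in narrow valleys."""
    f_ref = max(f_hist)              # reference: worst of the recent iterates
    slope = grad(x) @ d
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + delta * alpha * slope:
            return alpha
        alpha *= rho
    return alpha
```

With a history of one value this reduces to the monotone Armijo rule; with a longer history the very first trial step is accepted more often, which is the efficiency gain the abstract's analysis is about.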

16.
Although the study of global convergence of the Polak–Ribière–Polyak (PRP), Hestenes–Stiefel (HS) and Liu–Storey (LS) conjugate gradient methods has made great progress, the convergence of these algorithms for general nonlinear functions remains unsettled, let alone under weak conditions on the objective function and weak line search rules. It is also interesting to ask whether there exists a general method that converges under the standard Armijo line search for general nonconvex functions, since very few relevant results have been achieved. So in this paper, we present a new general form of conjugate gradient methods whose theoretical significance is attractive. With any formula β_k ≥ 0 and under weak conditions, the proposed method satisfies the sufficient descent condition independently of the line search used and the function convexity, and its global convergence can be achieved under the standard Wolfe line search or even under the standard Armijo line search. Based on this new method, convergence results on the PRP, HS, LS, Dai–Yuan–type (DY) and Conjugate–Descent–type (CD) methods are established. Preliminary numerical results show the efficiency of the proposed methods.

17.
Descent property is very important for an iterative method to be globally convergent. In this paper, we propose a way to construct sufficient descent directions for unconstrained optimization. We then apply the technique to derive a PSB (Powell-Symmetric-Broyden) based method. The PSB based method locally reduces to the standard PSB method with unit steplength. Under appropriate conditions, we show that the PSB based method with Armijo line search or Wolfe line search is globally and superlinearly convergent for uniformly convex problems. We also do some numerical experiments. The results show that the PSB based method is competitive with the standard BFGS method.

18.
In this article, based on the modified secant equation, we propose a modified Hestenes-Stiefel (HS) conjugate gradient method which has a similar form to the CG-DESCENT method proposed by Hager and Zhang (SIAM J Optim 16:170–192, 2005). The presented method can generate sufficient descent directions without any line search. Under some mild conditions, we show that it is globally convergent with the Armijo line search. Moreover, the R-linear convergence rate of the modified HS method is established. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known CG-DESCENT method.

19.
A nonconvex optimal control problem is examined for a system that is linear with respect to state and has a terminal objective functional representable as the difference of two convex functions. A new local search method is proposed, and its convergence is proved. A strategy is also developed for the search of a globally optimal control process, because the Pontryagin and Bellman principles as applied to the above problem do not distinguish between the locally and globally optimal processes. The convergence of this strategy under appropriate conditions is proved.
