Similar Documents
1.
Xu Zeshui. Journal of Mathematics (数学杂志), 2002, 22(1): 27-30.
This paper proposes a new class of conjugate gradient methods. The search directions remain descent directions throughout the iterations, and global convergence is proved under a class of inexact line-search conditions.
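For orientation, the following is a minimal sketch of the generic nonlinear conjugate gradient iteration that the papers in this list build on; the Fletcher-Reeves choice of beta_k, the Armijo backtracking, and all function names are illustrative assumptions, not the specific method of this paper.

```python
import numpy as np

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG iteration:
    x_{k+1} = x_k + alpha_k * d_k,  d_k = -g_k + beta_k * d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Simple Armijo backtracking as a stand-in for the inexact line
        # searches analysed in these papers.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves choice (illustrative)
        d = -g_new + beta * d
        if d @ g_new >= 0:                   # restart if descent is lost
            d = -g_new
        x, g = x_new, g_new
    return x
```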

2.
This paper studies a class of special non-smooth optimization problems with wide applications in compressed sensing, image processing, and related fields. A smoothing gradient method for solving such problems is given together with a proof of its global convergence, and numerical experiments demonstrate the effectiveness of the algorithm.
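A common smoothing device for the l1-type non-smooth terms arising in compressed sensing replaces |t| with sqrt(t^2 + mu^2); the sketch below illustrates this idea and is an assumption, not necessarily the smoothing used in this paper.

```python
import numpy as np

def smoothed_l1(x, mu=1e-3):
    """Smooth approximation of ||x||_1: sum_i sqrt(x_i^2 + mu^2)."""
    return np.sum(np.sqrt(x**2 + mu**2))

def smoothed_l1_grad(x, mu=1e-3):
    """Gradient of the smoothed l1 term; well-defined everywhere,
    unlike the subdifferential of |x_i| at zero."""
    return x / np.sqrt(x**2 + mu**2)
```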

3.
Guo Jie, Wan Zhong. Mathematica Numerica Sinica (计算数学), 2022, 44(3): 324-338.
Based on an exponential penalty function, this paper modifies a recently proposed three-term conjugate gradient method for unconstrained optimization and applies it to more complex large-scale minimax problems. The search direction generated by the method is shown to be a sufficient descent direction for each smooth subproblem, independently of the line-search rule used. On this basis, an algorithm for large-scale minimax problems is designed, and its global convergence is proved under reasonable assumptions. Numerical experiments show that the algorithm outperforms similar algorithms in the literature.
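For context, the standard exponential penalty (log-sum-exp) smoothing of a finite max is shown below; whether the paper uses exactly this form is an assumption:

```latex
\max_{1 \le i \le m} f_i(x) \;\approx\; F_p(x) := \frac{1}{p}\ln\sum_{i=1}^{m} e^{\,p f_i(x)},
\qquad \max_i f_i(x) \;\le\; F_p(x) \;\le\; \max_i f_i(x) + \frac{\ln m}{p} \quad (p > 0).
```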

4.
A new class of conjugate gradient algorithms with inexact line search (cited 4 times; 0 self-citations)
By an appropriate choice of the iteration parameter, this paper presents a new class of conjugate gradient algorithms. The search directions remain descent directions throughout the iterations, and global convergence is proved under general inexact line-search conditions.

5.
Global convergence of a class of conjugate gradient methods under the Goldstein line search (cited 3 times; 0 self-citations)
Xu Zeshui. Journal of Mathematics (数学杂志), 2000, 20(1): 13-16.
This paper proves that the class of conjugate gradient methods proposed in [1] is globally convergent under the Goldstein inexact line search.

6.
The conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient method and proves its global convergence under a generalized Wolfe line-search condition. Finally, numerical experiments are carried out; the results show that the algorithm converges well and is effective.

7.
Global convergence results for a class of conjugate gradient methods (cited 3 times; 0 self-citations)
This paper proves the global convergence of a class of conjugate gradient methods under the Grippo-Lucidi line search when $\beta_k = \sigma_1 \beta_k^{\mathrm{PRP}} + \sigma_2 \beta_k^{\mathrm{new}}$, where $\sigma_1 \ge 0$, $\sigma_2 \ge 0$, $\sigma_1 + \sigma_2 > 0$, and $\beta_k^{\mathrm{new}} = g_k^{T}(g_k - g_{k-1}) \,/\, (-d_{k-1}^{T} g_{k-1})$. Good numerical results for this class of methods are also reported.
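The hybrid parameter stated in the abstract can be computed directly; in this sketch the function name and the default weights are illustrative assumptions.

```python
import numpy as np

def beta_hybrid(g, g_prev, d_prev, sigma1=0.5, sigma2=0.5):
    """beta_k = sigma1*beta_PRP + sigma2*beta_new with sigma1, sigma2 >= 0
    and sigma1 + sigma2 > 0, as given in the abstract."""
    y = g - g_prev                                # gradient difference g_k - g_{k-1}
    beta_prp = (g @ y) / (g_prev @ g_prev)        # Polak-Ribiere-Polyak parameter
    beta_new = (g @ y) / (-(d_prev @ g_prev))     # beta_k^new; d_prev is a descent
                                                  # direction, so the denominator is > 0
    return sigma1 * beta_prp + sigma2 * beta_new
```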

8.
Zhou Guangming, Huang Yunqing. Journal of Mathematics (数学杂志), 2006, 26(2): 191-196.
This paper proposes a new formula for computing the key parameter $\beta_k$ in the conjugate gradient method, whose value depends on the decrease of the objective function, and also constructs a hybrid form of it. Conjugate gradient methods using the new formula and its hybrid form are both convergent. Extensive numerical experiments show that they are very effective and robust and can be used in large-scale scientific computing.

9.
A new class of conjugate projection gradient algorithms (cited 2 times; 0 self-citations)
Using the notion of conjugate projection introduced in [5] and combining it with the ideas of Du Dingzhu [3], this paper proposes a new class of conjugate gradient projection algorithms. Under certain conditions, the algorithm is proved to be globally convergent with a superlinear convergence rate.

10.
Li Xiangli, Zhao Wenjuan. Mathematica Applicata (应用数学), 2020, 33(2): 436-442.
The conjugate gradient method is an important method for solving large-scale unconstrained optimization problems. This paper studies the parameter in the Dai-Liao (DL) conjugate gradient method and proposes a new adaptive DL conjugate gradient method. Global convergence of the method is proved under suitable conditions. Numerical results show that the method is effective on the given test problems.
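For reference, the classical Dai-Liao parameter that the adaptive variant tunes has the standard form below (the adaptive rule for t is the paper's contribution and is not reproduced here):

```latex
\beta_k^{\mathrm{DL}} = \frac{g_k^{T}\left(y_{k-1} - t\,s_{k-1}\right)}{d_{k-1}^{T} y_{k-1}},
\qquad y_{k-1} = g_k - g_{k-1},\quad s_{k-1} = x_k - x_{k-1},\quad t > 0.
```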

11.
Optimization, 2012, 61(12): 2679-2691.
In this article, we present an improved three-term conjugate gradient algorithm for large-scale unconstrained optimization. The search directions in the developed algorithm are proved to satisfy an approximate secant equation as well as the Dai-Liao conjugacy condition. With the standard Wolfe line search and a restart strategy, global convergence of the algorithm is established under mild conditions. Applying the algorithm to 75 benchmark test problems with dimensions from 1000 to 10,000, the numerical results indicate that it outperforms state-of-the-art algorithms from the literature, requiring less CPU time and fewer iterations for large-scale unconstrained optimization.
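In standard notation (ours, not quoted from the paper), the two conditions mentioned in the abstract read:

```latex
B_k s_{k-1} = y_{k-1} \quad \text{(secant equation)}, \qquad
d_k^{T} y_{k-1} = -t\, g_k^{T} s_{k-1} \quad \text{(Dai--Liao conjugacy condition, } t \ge 0\text{)}.
```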

12.
Optimization, 2012, 61(2): 163-179.
In this article, we consider the global convergence of the Polak–Ribière–Polyak conjugate gradient method (abbreviated PRP method) for minimizing functions with Lipschitz continuous partial derivatives. A novel form of non-monotone line search is proposed to guarantee the global convergence of the PRP method. It is also shown that the PRP method has a linear convergence rate under some mild conditions when the non-monotone line search reduces to a related monotone line search. The new non-monotone line search needs to estimate the Lipschitz constant of the gradient of the objective function, for which two practical estimates are proposed to find a suitable initial step size for the PRP method. Numerical results show that the new line search approach is efficient in practical computation.
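One practical way to turn a local Lipschitz estimate of the gradient into an initial step size is sketched below; this is an assumption for illustration, since the paper proposes its own two estimations, which are not reproduced here.

```python
import numpy as np

def initial_step(g, g_prev, x, x_prev):
    """Estimate a local Lipschitz constant of the gradient from the last
    two iterates and return 1/L as a trial initial step size."""
    L_est = np.linalg.norm(g - g_prev) / np.linalg.norm(x - x_prev)
    return 1.0 / max(L_est, 1e-12)   # guard against a zero estimate
```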

13.
Yanyun Ding, Jianwei Li. Optimization, 2017, 66(12): 2309-2328.
The recently designed non-linear conjugate gradient method of Dai and Kou [SIAM J Optim. 2013;23:296–320] is currently very efficient for solving large-scale unconstrained minimization problems, owing to its simple iterative form, low storage requirement, and closeness to the scaled memoryless BFGS method. Because of these attractive properties, the method was successfully extended in recent years to solve high-dimensional symmetric non-linear equations. Nevertheless, its numerical performance in solving convex constrained monotone equations has never been explored. In this paper, combining it with the projection method of Solodov and Svaiter, we develop a family of non-linear conjugate gradient methods for convex constrained monotone equations. The proposed methods do not require the Jacobian of the equations and do not store any matrix at any iteration, so they have the potential to solve high-dimensional non-smooth problems. We prove the global convergence of the proposed class of methods and establish its R-linear convergence rate under some reasonable conditions. Finally, we report numerical experiments showing that the proposed methods are efficient and promising.
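The Solodov-Svaiter hyperplane projection step has the following well-known form; the helper name project_C (the projection onto the feasible set C) is a placeholder assumption, and all inputs are numpy arrays.

```python
def projection_step(F, x, z, project_C):
    """Given a trial point z = x + alpha*d with F(z)^T (x - z) > 0, project x
    onto the separating hyperplane {u : F(z)^T (u - z) = 0}, then back onto C."""
    Fz = F(z)
    lam = (Fz @ (x - z)) / (Fz @ Fz)   # step length to the hyperplane
    return project_C(x - lam * Fz)
```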

14.
Based on the well-known PRP conjugate gradient method and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for large-scale unconstrained optimization. The method generates a sufficient descent search direction at every iteration, independently of any line-search condition. Under the standard Wolfe line search, global convergence and a linear convergence rate of the modified PRP method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.
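The sufficient descent property referred to here is, in its usual form (c > 0 a constant; this is the standard definition, not a formula quoted from the paper):

```latex
g_k^{T} d_k \;\le\; -c\,\|g_k\|^{2} \qquad \text{for all } k.
```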

15.
Conjugate gradient methods are a widely used class of methods for solving large-scale unconstrained optimization problems. This paper proposes a new nonlinear conjugate gradient (CG) method; theoretical analysis shows that the new algorithm possesses the sufficient descent property under a variety of line-search conditions. A global convergence theorem for the new CG algorithm is further proved. Finally, extensive numerical experiments show that, compared with several classical CG methods, the new algorithm is computationally more efficient.

16.
Optimization, 2012, 61(4): 993-1009.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been much studied recently. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes three already existing practical nonlinear conjugate gradient methods but also contains other families of conjugate gradient methods as subfamilies. With the Wolfe line search, the two-parameter family is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. Numerical results show that the methods are efficient on the given test problems. In addition, methods related to this family are discussed in a unified way.

17.
Conjugate gradient methods have played a special role in solving large-scale nonlinear problems. Recently, the author and Dai proposed an efficient nonlinear conjugate gradient method called CGOPT by seeking the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method. In this paper, we make use of two types of modified secant equations to improve the CGOPT method. Under some assumptions, the improved methods are shown to be globally convergent. Numerical results are also reported.

18.
Optimization, 2012, 61(4): 1011-1031.
This article deals with the conjugate gradient method on a Riemannian manifold, with an interest in global convergence analysis. Existing conjugate gradient algorithms on a manifold endowed with a vector transport require the assumption that the vector transport does not increase the norm of tangent vectors in order to guarantee that the generated sequences have a global convergence property. In this article, the notion of a scaled vector transport is introduced to improve the algorithm so that the generated sequences have a global convergence property under a relaxed assumption: the transported vector is rescaled whenever its norm has increased during the transport. Global convergence is proved theoretically and observed numerically with examples. In fact, the numerical experiments show that there exist minimization problems for which the existing algorithm generates divergent sequences while the proposed algorithm generates convergent ones.
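The rescaling described in the abstract admits a direct sketch; the helper names transport, norm_from, and norm_to (the Riemannian norms in the source and target tangent spaces) are placeholder assumptions.

```python
def scaled_transport(transport, v, norm_from, norm_to):
    """Transport a tangent vector v, then rescale it whenever the vector
    transport has increased its norm, as described in the abstract."""
    w = transport(v)                 # vector transport to the new tangent space
    nv, nw = norm_from(v), norm_to(w)
    if nw > nv:                      # norm grew during transport
        w = (nv / nw) * w            # rescale back to the original norm
    return w
```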
