Similar Articles
19 similar articles found.
1.
This paper proposes an algorithm for solving the nonlinear program (P), where f: R^n → R^1 and g: R^n → R^m. In each cycle of the algorithm, the following quadratic program Q(z_i, H_i, r_i) with perturbation term r_i is solved:

2.
A new class of three-term memory gradient algorithms is proposed for unconstrained optimization. Under certain assumptions on the parameters, their admissible ranges are determined so that the three-term memory gradient direction is a sufficient descent direction for the objective function. Global convergence of the algorithm is discussed under a nonmonotone step-size search. To obtain an algorithm with better convergence properties, a new memory gradient projection algorithm is proposed by incorporating techniques from Solodov and Svaiter (2000), and it is proved to be globally convergent when the objective function is pseudoconvex.
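For illustration only, here is a minimal sketch of a three-term memory gradient direction with parameter scalings chosen to force sufficient descent; the cap eta and the specific scalings are assumptions, not the rule of the cited paper.

```python
import numpy as np

def three_term_direction(g, d_prev, g_prev, eta=0.2, eps=1e-16):
    """Sketch of a three-term memory gradient direction
    d = -g + beta*d_prev + gamma*g_prev. The scalings below cap each
    correction term so that d'g <= -(1 - 2*eta)*||g||^2, i.e. sufficient
    descent for eta < 1/2; illustrative, not the paper's parameter rule."""
    gg = g.dot(g)
    beta = eta * gg / max(abs(d_prev.dot(g)), eps)
    gamma = eta * gg / max(abs(g_prev.dot(g)), eps)
    return -g + beta * d_prev + gamma * g_prev
```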

3.
Designing conjugate gradient methods by solving an optimization problem with a penalty parameter is a new idea. Based on Fatemi's optimization problem, a spectral three-term conjugate gradient method is constructed by estimating the step size and choosing a suitable penalty parameter; the spectral parameter is then modified in order to prove global convergence. Sufficient descent and global convergence of the spectral three-term conjugate gradient algorithm are proved under the standard Wolfe line search. Finally, tests of several algorithms on the same examples show that the new method performs well numerically.

4.
This paper proposes two Fletcher-Reeves (FR) conjugate gradient methods whose search directions carry a perturbation term. The iteration is x_{k+1} = x_k + α_k(s_k + ω_k), where s_k is determined by the conjugate gradient recursion, ω_k is the perturbation term, and α_k is determined by a line search rather than being forced to tend to zero. We prove global convergence of both algorithms under very general assumptions, without boundedness conditions such as a lower bound on the objective function or a bounded level set.
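The iteration x_{k+1} = x_k + α_k(s_k + ω_k) can be sketched as follows; the Armijo backtracking rule and the perturb model are assumptions standing in for the paper's more general line searches.

```python
import numpy as np

def fr_cg_perturbed(f, grad, x0, perturb=lambda k: 0.0, max_iter=500, tol=1e-6):
    """Sketch of an FR conjugate gradient method with a perturbed search
    direction: x_{k+1} = x_k + alpha_k*(s_k + w_k)."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    s = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = s + perturb(k)                      # perturbed direction s_k + w_k
        alpha, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5                        # backtracking Armijo (assumption)
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)      # Fletcher-Reeves parameter
        s = -g_new + beta * s
        g = g_new
    return x
```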

5.
Using a generalized projection matrix, a condition is given on the parameters of the three-term memory gradient algorithm for unconstrained programming, determining their admissible ranges so that a three-term memory gradient generalized-projection descent direction of the objective function is obtained. On this basis, a three-term memory gradient generalized projection algorithm is established for nonlinear optimization problems with equality and inequality constraints, and its convergence is proved. Versions of the algorithm combining the FR, PR, and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient methods to constrained programming. Numerical examples show that the algorithm is effective.

6.
Convergence of an Asynchronous Batch Gradient Algorithm with Momentum for Pi-sigma Neural Networks
熊焱, 张超. 《应用数学》 2008, 21(1): 207-212
This paper introduces a momentum term into the asynchronous batch gradient algorithm for training Pi-sigma neural networks, which effectively improves the convergence efficiency of the algorithm. The convergence of the algorithm is studied theoretically: a monotonicity theorem for the error function and weak and strong convergence theorems for the algorithm are given. Computer simulations also verify the effectiveness of the momentum-augmented asynchronous batch gradient algorithm and the correctness of the theoretical analysis.
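As a sketch, a batch gradient update with a momentum term typically takes the form below; grad_E, eta, and mu are placeholders, and the Pi-sigma architecture and the asynchronous update order are not modeled here.

```python
import numpy as np

def batch_gradient_momentum(grad_E, w0, eta=0.1, mu=0.9, epochs=100):
    """Sketch of a batch gradient update with a momentum term:
    w_{k+1} = w_k - eta*grad E(w_k) + mu*(w_k - w_{k-1})."""
    w, w_prev = w0.copy(), w0.copy()
    for _ in range(epochs):
        step = -eta * grad_E(w) + mu * (w - w_prev)  # gradient step + momentum
        w_prev, w = w, w + step
    return w
```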

7.
This paper discusses unconstrained minimax problems on R^n. A descent search direction on R^n is generated through a generalized gradient projection technique on R^{n+1}, and a generalized gradient projection type algorithm for the original problem on R^n is then established by combining it with an Armijo inexact line search. Under an affine linear independence condition, the algorithm is globally and strongly convergent. Preliminary numerical experiments are reported.

8.
This paper proposes a generalized gradient projection algorithm with a conjugate gradient parameter for nonlinear optimization problems with inequality constraints. The conjugate gradient parameter is easy to obtain, and the initial point can be chosen arbitrarily. Moreover, since the algorithm uses only information from the previous search direction, the computational cost is reduced. Global convergence is obtained under rather weak conditions. Numerical results show that the algorithm is effective.

9.
This paper proposes a class of three-term hybrid conjugate gradient algorithms for unconstrained optimization. The new algorithm combines the Hestenes-Stiefel (HS) method with the Dai-Yuan (DY) method and, without imposing a descent condition, is proved convergent under the Wolfe line search. Numerical experiments also show the advantage of this hybrid conjugate gradient algorithm over the HS and PRP methods.
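One classical way to blend the HS and DY parameters is the truncation beta = max(0, min(beta_HS, beta_DY)); the sketch below shows this standard combination, which is not necessarily the three-term rule of the cited paper.

```python
import numpy as np

def hybrid_hs_dy_beta(g_new, g, d):
    """Sketch of a classical hybrid HS/DY conjugate gradient parameter:
    beta = max(0, min(beta_HS, beta_DY))."""
    y = g_new - g
    denom = d.dot(y)                 # positive under the Wolfe conditions
    if abs(denom) < 1e-16:
        return 0.0
    beta_hs = g_new.dot(y) / denom   # Hestenes-Stiefel
    beta_dy = g_new.dot(g_new) / denom  # Dai-Yuan
    return max(0.0, min(beta_hs, beta_dy))
```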

10.
Spectral conjugate gradient algorithms are among the effective methods for large-scale unconstrained optimization. Based on the Hestenes-Stiefel algorithm and the spectral conjugate gradient algorithm, a spectral Hestenes-Stiefel conjugate gradient algorithm is proposed. Under the Wolfe line search, the search directions generated by the algorithm have the descent property, and global convergence can also be proved. Experiments on well-known test functions from the CUTEr library, evaluated with the well-known Dolan-Moré performance profiles, demonstrate the effectiveness of the new algorithm.
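A common template for a spectral conjugate gradient direction couples a BB-type scaling with the HS parameter; the sketch below follows that template and omits the cited paper's exact spectral parameter and descent safeguards.

```python
import numpy as np

def spectral_hs_direction(g_new, g, d, s):
    """Sketch of a spectral Hestenes-Stiefel direction
    d_new = -theta*g_new + beta_HS*d, with a BB-like spectral scaling
    theta = s's/s'y; a common template, not the paper's exact formulas."""
    y = g_new - g                    # gradient difference
    theta = s.dot(s) / s.dot(y)      # spectral (BB) scaling
    beta = g_new.dot(y) / d.dot(y)   # Hestenes-Stiefel parameter
    return -theta * g_new + beta * d
```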

11.
王丽平, 陈晓红. 《计算数学》 2009, 31(2): 127-136
The left conjugate gradient method is an emerging Krylov subspace method for solving large sparse linear systems. To overcome its numerical instability and possible breakdown of the iteration, this paper reformulates the original method equivalently, derives an alternative recursion for the left conjugate gradient directions, and presents a quasi-minimization left conjugate gradient algorithm. Numerical results confirm the connection between the variant and the original algorithm.

12.
First-order methods with momentum, such as Nesterov’s fast gradient method, are very useful for convex optimization problems, but can exhibit undesirable oscillations yielding slow convergence rates for some applications. An adaptive restarting scheme can improve the convergence rate of the fast gradient method, when the parameter of a strongly convex cost function is unknown or when the iterates of the algorithm enter a locally strongly convex region. Recently, we introduced the optimized gradient method, a first-order algorithm that has an inexpensive per-iteration computational cost similar to that of the fast gradient method, yet has a worst-case cost function rate that is twice as fast as that of the fast gradient method and that is optimal for large-dimensional smooth convex problems. Building upon the success of accelerating the fast gradient method using adaptive restart, this paper investigates similar heuristic acceleration of the optimized gradient method. We first derive a new first-order method that resembles the optimized gradient method for strongly convex quadratic problems with known function parameters, yielding a linear convergence rate that is faster than that of the analogous version of the fast gradient method. We then provide a heuristic analysis and numerical experiments that illustrate that adaptive restart can accelerate the convergence of the optimized gradient method. Numerical results also illustrate that adaptive restart is helpful for a proximal version of the optimized gradient method for nonsmooth composite convex functions.
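Below is a minimal sketch of gradient-based adaptive restart, in the style of O'Donoghue and Candès, applied to a Nesterov-type accelerated method; the momentum coefficients are the standard fast gradient ones, not the OGM coefficients of the cited paper.

```python
import numpy as np

def accel_gradient_restart(grad, x0, L, max_iter=1000, tol=1e-8):
    """Sketch of an accelerated gradient method with gradient-based
    adaptive restart: momentum is discarded whenever it points uphill,
    i.e. grad(y_k)'(x_{k+1} - x_k) > 0. L is the gradient Lipschitz constant."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(max_iter):
        g = grad(y)
        x_new = y - g / L                            # gradient step from y
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        if g.dot(x_new - x) > 0:                     # restart test
            y, t_new = x_new.copy(), 1.0             # drop the momentum
        else:
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x, t = x_new, t_new
    return x
```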

13.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method formed as a convex combination of two conjugate gradient methods satisfies the well-known Dai-Liao (D-L) conjugacy condition and, under a suitable condition, agrees with the Newton direction; furthermore, this property does not depend on the line search. We then prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used, and global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
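A convex combination of the LS and FR parameters can be sketched as below; the paper selects the weight t from a conjugacy/Newton-direction condition that is not reproduced here, so the fixed t is an assumption.

```python
import numpy as np

def hybrid_ls_fr_beta(g_new, g, d, t=0.5):
    """Sketch of a convex-combination hybrid conjugate gradient parameter:
    beta = (1 - t)*beta_LS + t*beta_FR, with t in [0, 1]."""
    y = g_new - g
    beta_ls = -g_new.dot(y) / d.dot(g)     # Liu-Storey
    beta_fr = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves
    return (1.0 - t) * beta_ls + t * beta_fr
```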

14.
孙清滢. 《计算数学》 2004, 26(4): 401-412
Using a generalized projection matrix, this paper gives a new range of values for the parameters in the super-memory gradient algorithm for unconstrained programming, guaranteeing a super-memory gradient generalized-projection descent direction of the objective function. Combined with a technique for handling arbitrary initial points, a super-memory gradient generalized projection algorithm with an arbitrary initial point is established for nonlinear optimization problems with inequality constraints, and its convergence is proved under rather weak conditions. Super-memory gradient generalized projection algorithms combining the FR, PR, and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient methods to constrained programming. Numerical examples show that the algorithm is effective.

15.
Conjugate gradient methods are important iterative methods for solving large-scale unconstrained optimization problems, and much recent research has focused on developing more effective variants. In this paper, we propose another hybrid conjugate gradient method, built as a linear combination of the Dai-Yuan (DY) and Hestenes-Stiefel (HS) methods. The sufficient descent condition and the global convergence of this method are established using the generalized Wolfe line search conditions. Compared with other conjugate gradient methods, the proposed method gives good numerical results and is effective.

16.
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which includes the well-known Polak-Ribière and Hestenes-Stiefel methods as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when applied to unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak-Ribière and Hestenes-Stiefel methods in this class are competitive with their monotone counterparts.
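A nonmonotone Armijo rule of the Grippo-Lampariello-Lucidi kind, which such methods typically use, can be sketched as follows; the window length M and the constants are illustrative assumptions.

```python
from collections import deque

def nonmonotone_armijo(f, x, d, g, f_hist, delta=1e-4, rho=0.5, alpha0=1.0):
    """Sketch of a nonmonotone Armijo rule: accept alpha once
    f(x + alpha*d) <= max of the last M objective values + delta*alpha*g'd.
    f_hist holds the recent objective values."""
    f_ref = max(f_hist)
    alpha = alpha0
    while f(x + alpha * d) > f_ref + delta * alpha * g.dot(d) and alpha > 1e-12:
        alpha *= rho
    return alpha

# usage sketch: keep a window of the last M = 10 objective values,
#   f_hist = deque([f(x0)], maxlen=10)
# and append f(x) to f_hist after every accepted step.
```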

17.
Optimization, 2012, 61(4-5): 395-415
The Barzilai and Borwein (BB) gradient method does not guarantee a descent in the objective function at each iteration, but performs better than the classical steepest descent (SD) method in practice. So far, the BB method has found many successful applications and generalizations in linear systems, unconstrained optimization, convex-constrained optimization, stochastic optimization, etc. In this article, we propose a new gradient method that uses the SD and the BB steps alternately; hence the name “alternate step (AS) gradient method.” Our theoretical and numerical analyses show that the AS method is a promising alternative to the BB method for linear systems. Unconstrained optimization algorithms related to the AS method are also discussed. In particular, a more efficient gradient algorithm is obtained by applying the idea of the AS method within the GBB algorithm of Raydan (1997).

To establish a general R-linear convergence result for gradient methods, an important property of the step size is identified in this article. Consequently, an R-linear convergence result is established for a large collection of gradient methods, including the AS method. Some interesting insights into gradient methods and a discussion of monotonicity and nonmonotonicity are also given.
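For a strictly convex quadratic, the alternate-step idea can be sketched by taking the exact steepest descent step and a BB step in turn; the exact alternation pattern below is an assumption for illustration.

```python
import numpy as np

def alternate_step_gradient(A, b, x0, max_iter=200, tol=1e-10):
    """Sketch of an alternate-step (AS) gradient method for the quadratic
    f(x) = 0.5 x'Ax - b'x with A symmetric positive definite: even
    iterations take the exact SD (Cauchy) step g'g/(g'Ag), odd iterations
    a BB step s's/(s'y) built from the previous move."""
    x = x0.copy()
    g = A @ x - b
    s = y = None
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if k % 2 == 0 or s is None:
            alpha = g.dot(g) / g.dot(A @ g)   # steepest descent step
        else:
            alpha = s.dot(s) / s.dot(y)       # BB step
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
    return x
```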

18.
Tensors have been a hot topic in the past decade, and eigenvalue problems for higher-order tensors have become increasingly important in numerical multilinear algebra. Several methods for finding the Z-eigenvalues and generalized eigenvalues of symmetric tensors have been given. However, the convergence of these methods is not assured when the tensor is not symmetric but only weakly symmetric. In this paper, we give two convergent gradient projection methods for computing some generalized eigenvalues of weakly symmetric tensors. The gradient projection method with the Armijo step-size rule (AGP) can be viewed as a modification of the GEAP method. The spectral gradient projection method, born from combining the BB method with the gradient projection method, is superior to the GEAP, AG, and AGP methods. We also make comparisons among the four methods. Some competitive numerical results are reported at the end of this paper.
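Here is a minimal sketch of a gradient projection iteration for a Z-eigenpair of a symmetric third-order tensor, with a fixed step in place of the Armijo rule used by the cited AGP method; the tensor order and the step size are assumptions.

```python
import numpy as np

def z_eigenpair_gp(A, x0, step=0.5, max_iter=500, tol=1e-10):
    """Sketch of gradient projection for a Z-eigenpair of a symmetric
    third-order tensor A (shape n*n*n): ascend f(x) = A x^3 on the unit
    sphere, projecting back by normalization."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        Axx = np.einsum('ijk,j,k->i', A, x, x)   # A x^2
        lam = x.dot(Axx)                          # Rayleigh-type value A x^3
        if np.linalg.norm(Axx - lam * x) < tol:   # eigen-residual test
            break
        x = x + step * Axx    # ascent step (grad = 3 A x^2; 3 folded into step)
        x /= np.linalg.norm(x)                    # project onto unit sphere
    return lam, x
```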

19.
For the parameter in the line-search direction of the super-memory gradient algorithm for unconstrained programming, an assumption is given that determines a new range of values guaranteeing that the search direction is a sufficient descent direction for the objective function, and a new class of memory gradient algorithms is proposed accordingly. Global convergence of the algorithm is discussed under the Armijo step-size search, without assuming boundedness of the iterate sequence, and modified forms of the memory gradient method combining the FR, PR, and HS conjugate gradient formulas are given. Numerical experiments show that the new algorithm is more stable and more effective than the FR, PR, and HS conjugate gradient methods and the super-memory gradient method under the Armijo line search.
