Similar Literature
19 similar documents found.
1.
Convergence analysis of optimization algorithms is an important area within optimization, but convergence alone is not a sufficient criterion for comparing the efficiency of different algorithms. A separate theory is therefore needed to measure how difficult an optimization problem is and how efficient an algorithm is; this theory is known as the complexity analysis of optimization algorithms. This paper consists of five parts. Section 1 introduces the background and theoretical framework of complexity analysis, gives its definitions, methods, and examples, and summarizes the complexity results presented in the paper. Section 2 treats the complexity analysis of smooth optimization, giving upper and lower complexity bounds for different problem classes together with a framework for the convergence analysis of accelerated gradient methods. Section 3 presents upper complexity bounds for nonsmooth optimization, covering the subgradient method, the center-of-gravity method, the ellipsoid method, and the proximal gradient method. Section 4 treats the conditional gradient method, giving its upper and lower complexity bounds as well as a framework for accelerated conditional gradient methods. Section 5 treats the complexity analysis of stochastic optimization algorithms, comparing the confidence levels and complexities with which they converge on convex and nonconvex problems.
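As one concrete instance of the kind of bound surveyed here (the classical statement for L-smooth convex minimization, quoted from the standard literature rather than from this survey): any first-order method is subject to Nesterov's worst-case lower bound, and the accelerated gradient method matches it up to a constant factor,

f(x_k) - f^* \;\ge\; \frac{3L\,\|x_0 - x^*\|^2}{32\,(k+1)^2} \quad (k \le (n-1)/2), \qquad f(x_k^{\mathrm{AGD}}) - f^* \;=\; O\!\left(\frac{L\,\|x_0 - x^*\|^2}{k^2}\right).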

2.
刘亚君  刘新为 《计算数学》2016,38(1):96-112
Gradient methods are an important class of methods for unconstrained optimization, and their numerical performance is closely tied to how the step size is chosen. Noting that the BB step size implicitly carries second-order information about the objective, this paper combines the BB method with the trust-region framework: the reciprocal of the BB step size is used to approximate the Hessian of the objective, while the trust-region subproblem is used to choose the gradient step size more flexibly, yielding monotone and nonmonotone trust-region BB methods for unconstrained optimization. Global convergence is proved under suitable assumptions. Numerical experiments show that, compared with existing BB-type methods for unconstrained optimization, the decrease of e_k = ‖x_k − x*‖ in the nonmonotone trust-region BB method exhibits a more pronounced staircase pattern and monotonicity, and hence faster convergence.
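For reference, a minimal Python sketch of the classical BB iteration that this method builds on (the plain BB gradient method, not the paper's trust-region variant; the function names and the quadratic test problem are illustrative assumptions):

import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, tol=1e-8, max_iter=1000):
    # Gradient descent with the Barzilai-Borwein (BB1) step size:
    # alpha_k = s^T s / s^T y, with s = x_k - x_{k-1}, y = g_k - g_{k-1};
    # 1/alpha_k acts as a scalar approximation of the Hessian.
    x = x0.copy()
    g = grad(x)
    alpha = alpha0                      # initial step, before BB information exists
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sty = s @ y
        alpha = (s @ s) / sty if sty > 1e-12 else alpha0   # safeguard
        x, g = x_new, g_new
    return x

# Illustrative use on a convex quadratic f(x) = 0.5 x^T A x - b^T x:
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_min = bb_gradient(lambda x: A @ x - b, np.zeros(3))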

3.
Inspired by the performance estimation problem (PEP) approach, this work optimizes the step-size coefficients of first-order methods for smooth convex minimization whose gradients at the iterates converge Q-linearly, by examining the convergence bound on the worst-case function error (i.e., the efficiency). A new efficient first-order method, called QGM, is introduced; it has a computationally efficient form similar to that of the optimized gradient method (OGM).

4.
A new class of memory gradient methods for unconstrained optimization is proposed, and global convergence and a linear convergence rate are proved under rather weak conditions. The algorithm uses a curve search that determines the search direction and the step size simultaneously at each step; it converges stably, requires no matrix computation or storage, and is therefore suitable for large-scale optimization problems. Numerical experiments show that the algorithm is effective.

5.
Many problems in modern statistics and signal processing reduce to nonsmooth convex optimization. This paper proposes a class of modified proximal gradient methods for such problems. The algorithm uses an adaptive step size, and its linear convergence is established without requiring strong convexity of the objective.
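For orientation, a minimal Python sketch of the generic proximal gradient iteration for an ℓ1-regularized objective with a fixed step size (a standard baseline; the paper's modification replaces the fixed step with an adaptive one, and the Lasso test problem is an illustrative assumption):

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (component-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(grad_f, x0, lam, alpha, n_iter=500):
    # Proximal gradient iteration for min_x f(x) + lam * ||x||_1
    # with a fixed step alpha.
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - alpha * grad_f(x), alpha * lam)
    return x

# Illustrative use on a Lasso problem, f(x) = 0.5 * ||A x - b||^2:
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
x_hat = prox_grad(lambda x: A.T @ (A @ x - b), np.zeros(5), lam=0.1, alpha=1.0 / L)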

6.
The augmented Lagrangian method is an effective method for nonlinear programming. This paper proves, from a new angle, the convergence of the augmented Lagrangian method for inequality-constrained nonsmooth convex optimization: the convergence theorem for the constant-step gradient method is used to prove the convergence of a constant-step gradient method applied to the dual problem built on the augmented Lagrangian function, from which the global convergence of the multiplier iteration of the augmented Lagrangian method follows.
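The multiplier iteration in question has the standard form for inequality constraints g_i(x) ≤ 0 (textbook notation, which may differ from the paper's):

x^k \in \arg\min_x L_c(x, \lambda^k), \qquad \lambda_i^{k+1} = \max\{0,\ \lambda_i^k + c\, g_i(x^k)\},

where L_c(x,\lambda) = f(x) + \frac{1}{2c}\sum_i\big(\max\{0,\ \lambda_i + c\,g_i(x)\}^2 - \lambda_i^2\big) is the augmented Lagrangian; the multiplier update is exactly a gradient ascent step of length c on the dual function, which is the connection the convergence argument exploits.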

7.
This paper studies smoothing Newton methods for the weighted linear complementarity problem. Using a class of smoothing functions, the weighted linear complementarity problem is reformulated equivalently as a smooth system of equations, and a new smoothing Newton method is proposed to solve it. Under suitable conditions, the algorithm is shown to be globally convergent and locally quadratically convergent. Unlike existing smoothing Newton methods, the algorithm generates the step size by a nonmonotone derivative-free line search, which gives it better convergence properties and practical performance.
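To illustrate what such a smoothing function can look like (one common construction for the scalar weighted complementarity condition a ≥ 0, b ≥ 0, ab = w; the paper's specific class of functions may differ):

\phi_\mu(a,b) = a + b - \sqrt{(a-b)^2 + 4w + 4\mu^2},

which is smooth for \mu > 0 and satisfies \phi_\mu(a,b) = 0 \iff a, b \ge 0,\ ab = w + \mu^2, so driving \mu \to 0 recovers the weighted complementarity condition.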

8.
Inverting the temperature-dependent thermal conductivity in two-dimensional transient heat conduction
The temperature-dependent thermal conductivity of a two-dimensional transient heat conduction problem is inverted based on the boundary element method (BEM). The Kirchhoff transformation converts the nonlinear governing equation into a linear one, and the BEM is used to build the numerical analysis model of the two-dimensional transient heat conduction problem. The parameters to be inverted are taken as the optimization variables, and the sum of squared residuals between the computed and measured temperatures at the measurement points is the objective function. The complex-variable-differentiation method is introduced to compute the gradient of the objective, and a gradient regularization method optimizes the objective to obtain the inversion results. The effects of the time step, the number of measurement points, and random noise on the results are examined: reducing the time step or increasing the number of measurement points speeds up convergence, and lower random noise gives more accurate results. Numerical examples demonstrate the effectiveness and stability of the algorithm.
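The Kirchhoff transformation referred to above has the standard form (quoted for context; the paper's exact treatment of the transient term is not reproduced here):

\psi(T) = \int_{T_0}^{T} k(\tau)\, d\tau \quad\Longrightarrow\quad \nabla\cdot\big(k(T)\,\nabla T\big) = \nabla^2 \psi,

so the nonlinear diffusion operator in T becomes a linear Laplacian in the transformed variable \psi.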

9.
In recent years, stochastic variance-reduction algorithms have been widely used to solve large-scale optimization problems in machine learning, but how to choose a suitable step size for this class of algorithms remains an open question. Inspired by the SVRG-BB algorithm (stochastic variance reduced gradient with Barzilai-Borwein step size), this paper proposes an adaptive step size for variance-reduction algorithms based on an estimate of the local Lipschitz constant, and derives the parameter choices for applying this step size to different algorithms by formulating a minimax problem. Combining the step size with the stochastic recursive gradient algorithm (SARAH) and the stochastic variance reduced gradient (SVRG) algorithm yields SARAH-AS (SARAH with adaptive step size) and SVRG-AS (SVRG with adaptive step size), and the linear convergence of the sequence of iterate distances is proved for these algorithms under a strong convexity assumption. The paper also offers a novel perspective on why the SARAH+ algorithm is effective. Numerical results on public data sets show that the proposed adaptive step size performs well within variance-reduction algorithms.
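As background, a minimal Python sketch of plain SVRG with a fixed step size (the baseline that these adaptive-step variants modify; the function names and the ridge-regression test problem are illustrative assumptions):

import numpy as np

def svrg(grads, x0, eta, n_epochs=30, m=None, seed=0):
    # Plain SVRG with a fixed step eta for min (1/n) sum_i f_i(x).
    # Variance-reduced update: x <- x - eta * (g_i(x) - g_i(x_ref) + mu),
    # where mu is the full gradient at the reference (snapshot) point.
    rng = np.random.default_rng(seed)
    n = len(grads)
    m = m or 2 * n                       # inner-loop length
    x_ref = x0.copy()
    for _ in range(n_epochs):
        mu = sum(g(x_ref) for g in grads) / n    # full-gradient snapshot
        x = x_ref.copy()
        for _ in range(m):
            i = rng.integers(n)
            x = x - eta * (grads[i](x) - grads[i](x_ref) + mu)
        x_ref = x
    return x_ref

# Illustrative use on ridge regression, f_i(x) = 0.5*(a_i.x - b_i)^2 + 0.5*lam*||x||^2:
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((50, 5)), rng.standard_normal(50), 0.1
grads = [(lambda ai, bi: lambda x: (ai @ x - bi) * ai + lam * x)(ai, bi)
         for ai, bi in zip(A, b)]
x_hat = svrg(grads, np.zeros(5), eta=0.01)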

10.
This paper proposes an inexact gradient mirror-descent algorithm for nonsmooth convex optimization. The algorithm generalizes the gradient mirror-descent algorithm proposed by Allen-Zhu in 2016 for smooth convex optimization, and it allows errors both in the gradient of the smooth part of the objective and in the proximal operator of the nonsmooth part. Under suitable conditions, an O(1/k^2) convergence rate of the function-value sequence is established, where k is the iteration counter. Numerical results on Lasso and logistic regression problems show that the algorithm is effective.

11.
The trust region (TR) method for optimization is a class of effective methods. The conic model can be regarded as a generalized quadratic model, and it possesses the good convergence properties of the quadratic model near the minimizer. The Barzilai and Borwein (BB) gradient method is also an effective method; it can be used for solving large-scale optimization problems, as it avoids the expensive computation and storage of matrices. In addition, the BB stepsize is easy to determine without large computational effort. In this paper, based on the conic trust region framework, we employ the generalized BB stepsize and propose a new nonmonotone adaptive trust region method based on a simple conic model for large-scale unconstrained optimization. Unlike the traditional conic model, the Hessian approximation is a scalar matrix based on the generalized BB stepsize, which results in a simple conic model. By adding the nonmonotone technique and the adaptive technique to the simple conic model, the new method needs less storage and converges faster. The global convergence of the algorithm is established under certain conditions. Numerical results indicate that the new method is effective and attractive for large-scale unconstrained optimization problems.

12.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage-stepsize nonmonotone (in the sense of Zhang H.C.) variable-metric gradient projection method is established for optimization problems with convex-set constraints, and its global convergence is proved. Numerical experiments show that the algorithm is effective and suitable for solving large-scale problems.

13.
A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein: their method is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and it performs quite well for high-dimensional problems. The BB method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. It is therefore very desirable to find stepsize formulae that enable fast convergence and possess the monotone property. Such a stepsize α_k for the steepest descent method is suggested in this paper. An algorithm using this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, confirming that the new method can find the exact solution within three iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version is also presented, in which the new stepsize-selection technique is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large-scale problems and better for small-scale problems.
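For context, a Python sketch of the exact-line-search baseline that the new stepsize is interleaved with, on a convex quadratic (a minimal sketch; the paper's new stepsize formula itself is not reproduced here, and the test matrix is an illustrative assumption):

import numpy as np

def steepest_descent_quadratic(A, b, x0, tol=1e-10, max_iter=10000):
    # Steepest descent with exact line search for f(x) = 0.5 x^T A x - b^T x;
    # for quadratics the exact step along -g is alpha = g^T g / g^T A g.
    x = x0.copy()
    for _ in range(max_iter):
        g = A @ x - b
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))   # exact minimizer along -g
        x = x - alpha * g
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_hat = steepest_descent_quadratic(A, b, np.zeros(2))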

14.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage-stepsize nonmonotone variable-metric gradient projection algorithm is established for optimization problems with convex-set constraints, and its global convergence, together with a Q-superlinear convergence rate under certain conditions, is proved. Numerical results show that the new algorithm is effective and suitable for solving large-scale problems.

15.
It is well known that the conjugate gradient method is widely used for solving large-scale optimization problems. In this paper, a modified trust-region method with Beale's Preconditioned Conjugate Gradient (BPCG) technique is developed for solving unconstrained optimization problems. The modified version adopts an adaptive rule and retains some useful information when an unsuccessful iteration occurs, and therefore improves the efficiency of the method. The behavior and the convergence properties of the method are discussed, and some numerical experiments are reported. This work was partially supported by a grant from the National Natural Science Foundation of China, Grant 20040319003 of the Doctoral Site of the Education Ministry of China, and SRG 7001428 of City University of Hong Kong.

16.
A new class of super-memory gradient methods for unconstrained optimization is studied, and the global convergence and linear convergence rate of the algorithm are analyzed. The algorithm generates new iterates by a multi-step curve-search rule, determining the descent direction and the step size simultaneously at each iteration, and it requires no matrix computation or storage, so it is suitable for large-scale optimization problems. Numerical experiments show that the algorithm is effective.

17.
This paper studies $\ell_1$-regularized optimization on the sphere, where the objective consists of a general smooth term plus a nonsmooth $\ell_1$-regularization term, and the stochastic gradient of the smooth part is assumed to be available from a stochastic first-order oracle. Problems of this type are widely used in machine learning, image and signal processing, and statistics. Combining the manifold proximal gradient method with stochastic gradient estimation, a spherical stochastic proximal gradient algorithm is proposed. Based on a global implicit function theorem for nonsmooth functions, the Lipschitz continuity of the subproblem solution with respect to its parameters is analyzed, and the global convergence of the algorithm is then proved. Numerical results on spherical $\ell_1$-regularized quadratic programming, finite-sum SPCA, and spherical $\ell_1$-regularized logistic regression problems, with both random and real data sets, show that the proposed algorithm has an advantage in CPU time over the manifold proximal gradient method and the Riemannian stochastic proximal gradient method.

18.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with a nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters of the nonmonotone line search, particularly for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search, and the two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
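The nonmonotone acceptance rule with memory parameter M referred to above is, in its classical (GLL) form (standard notation; the paper's adaptive variant changes how the memory is handled):

f(x_k + \alpha_k d_k) \;\le\; \max_{0 \le j \le \min(k, M)} f(x_{k-j}) \;+\; \delta\, \alpha_k\, g_k^{\top} d_k, \qquad \delta \in (0,1),

so the step only needs to improve on the worst of the last M+1 function values rather than on f(x_k) itself.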

19.
《Optimization》2012,61(4-5):395-415
The Barzilai and Borwein (BB) gradient method does not guarantee a descent in the objective function at each iteration, but performs better than the classical steepest descent (SD) method in practice. So far, the BB method has found many successful applications and generalizations in linear systems, unconstrained optimization, convex-constrained optimization, stochastic optimization, etc. In this article, we propose a new gradient method that uses the SD and the BB steps alternately. Hence the name “alternate step (AS) gradient method.” Our theoretical and numerical analyses show that the AS method is a promising alternative to the BB method for linear systems. Unconstrained optimization algorithms related to the AS method are also discussed. Particularly, a more efficient gradient algorithm is provided by exploring the idea of the AS method in the GBB algorithm by Raydan (1997).

To establish a general R-linear convergence result for gradient methods, an important property of the stepsize is identified in this article. Consequently, an R-linear convergence result is established for a large collection of gradient methods, including the AS method. Some interesting insights into gradient methods and a discussion of monotonicity and nonmonotonicity are also given.
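A minimal Python sketch of the alternating idea for a symmetric positive definite linear system (an illustration in the spirit described above, not the authors' exact scheme; it uses the fact that, for quadratics, the BB1 step equals the exact SD step of the previous iteration):

import numpy as np

def alternate_step(A, b, x0, tol=1e-10, max_iter=10000):
    # Gradient method for the SPD system A x = b that alternates the exact
    # steepest-descent (SD) step with a BB-type step. For quadratics the
    # BB1 step coincides with the previous SD step, so each SD step is
    # simply reused once.
    x = x0.copy()
    g = A @ x - b
    alpha = (g @ g) / (g @ (A @ g))
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if k % 2 == 0:                    # even iterations: fresh SD step
            alpha = (g @ g) / (g @ (A @ g))
        # odd iterations: reuse the previous SD step (= the BB1 step here)
        x = x - alpha * g
        g = A @ x - b
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_hat = alternate_step(A, b, np.zeros(2))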
