Similar Documents
1.
Global convergence is proved for a partitioned BFGS algorithm, when applied to a partially separable problem with a convex decomposition. This case covers a known practical optimization method for large-dimensional unconstrained problems. Inexact solution of the linear system defining the search direction and variants of the steplength rule are also shown to be acceptable without affecting the global convergence properties.

2.
周群艳  陈俊 《应用数学》2012,25(1):202-208
This paper presents a new globally convergent gradient method for large-scale unconstrained optimization. The new algorithm takes steps along the negative gradient direction, with the initial steplength determined from a scalar-matrix approximation of the Hessian of the objective function. It is proved that every accumulation point of the sequence generated by the algorithm is a stationary point, and numerical experiments show that the algorithm is reliable and efficient.

3.
In this paper, an adaptive nonmonotone line search method for unconstrained minimization problems is proposed. At every iteration, the new algorithm selects only one of two directions, either a Newton-type direction or a negative curvature direction, to perform the line search. The nonmonotone technique is included in the backtracking line search when the Newton-type direction is the search direction. Furthermore, if the negative curvature direction is the search direction, we increase the steplength under certain conditions. Global convergence to a stationary point satisfying the second-order optimality conditions is established. Some numerical results which show the efficiency of the new algorithm are reported.
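The acceptance test behind nonmonotone backtracking can be sketched in a few lines. The sketch below is a generic Grippo-Lampariello-Lucidi-style rule that compares against the maximum of recent objective values rather than the current one; it is an illustration, not the authors' exact algorithm, and all names and parameter values are illustrative.

```python
import numpy as np

def nonmonotone_backtracking(f, x, d, g, history, c1=1e-4, rho=0.5, max_iter=50):
    """Backtracking Armijo search tested against the worst of the
    recently stored objective values instead of f(x) alone."""
    f_ref = max(history)          # reference value: worst recent objective
    slope = c1 * g.dot(d)         # sufficient-decrease term (negative for descent d)
    t = 1.0
    for _ in range(max_iter):
        if f(x + t * d) <= f_ref + t * slope:
            return t
        t *= rho                  # shrink the steplength and retry
    return t

# Usage on f(x) = ||x||^2 with a steepest-descent direction.
f = lambda x: x.dot(x)
x = np.array([1.0, 2.0])
g = 2 * x
d = -g
t = nonmonotone_backtracking(f, x, d, g, history=[f(x)])
assert f(x + t * d) < f(x)        # the accepted step decreases f
```

With a one-element history the rule reduces to ordinary monotone Armijo backtracking; a longer `history` window is what permits temporary increases of the objective.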

4.
万中  冯冬冬 《计算数学》2011,33(4):387-396
Motivated by the advantages of nonmonotone line searches in the search for optimal solutions, a new class of nonmonotone cautious BFGS algorithms is proposed. Unlike existing methods, the algorithmic parameter that controls the degree of nonmonotonicity is not held fixed; instead, it is adjusted automatically using available information about the objective function and its gradient, so as to improve the numerical behavior of the algorithm. Under suitable assumptions, global convergence of the new nonmonotone cautious BFGS algorithm is established. Benchmark optimization test problems were used to test the al…

6.
In this paper, an unconstrained optimization method using the nonmonotone second-order Goldstein's line search is proposed. By using the negative curvature information from the Hessian, the sequence generated is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of our new method. This work was supported by the National Natural Science Foundation of China (Grant No. 10231060) and the Specialized Research Fund of the Doctoral Program of Higher Education of China (Grant No. 20040319003).

7.
Recently, in [12] a very general class of truncated Newton methods has been proposed for solving large scale unconstrained optimization problems. In this work we present the results of an extensive numerical experience obtained with different algorithms that belong to the preceding class. This numerical study, besides investigating which are the best algorithmic choices of the proposed approach, clarifies some significant points that underlie every truncated Newton based algorithm.

8.
On the limited memory BFGS method for large scale optimization
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems. This work was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contract DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
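The computational core of L-BFGS is the two-loop recursion, which applies the implicit inverse-Hessian approximation to the gradient using only the stored difference pairs. The sketch below is a generic textbook version (not Liu and Nocedal's code) and includes the simple diagonal scaling of the initial Hessian approximation referred to in the abstract; list ordering (newest pair first) is a convention chosen here.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns the L-BFGS search direction -H_k * g.
    s_list/y_list hold the stored iterate/gradient differences, newest first."""
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(s_list, y_list, rhos):       # newest -> oldest
        a = rho * s.dot(q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # scaled-identity initial Hessian approximation
        gamma = s_list[0].dot(y_list[0]) / y_list[0].dot(y_list[0])
        q = gamma * q
    for (s, y, rho), a in zip(list(zip(s_list, y_list, rhos))[::-1],
                              alphas[::-1]):          # oldest -> newest
        b = rho * y.dot(q)
        q = q + (a - b) * s
    return -q

# Usage with a single stored pair.
g = np.array([1.0, 1.0])
d = lbfgs_direction(g, [np.array([1.0, 0.0])], [np.array([2.0, 0.0])])
assert g.dot(d) < 0   # a descent direction
```

Because only a handful of vector pairs are stored, each iteration costs O(mn) rather than the O(n^2) of a dense quasi-Newton update, which is the source of the low iteration cost noted above.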

9.
This paper constructs a derivative-free algorithm for unconstrained nonlinear optimization. The algorithm is based on the classical pattern-search method: after each unsuccessful iteration, it makes full use of the information at the existing iterates to construct an approximate descent direction and generate a new iterate. Under fairly weak conditions, the algorithm is globally convergent. Numerical experiments show a clear reduction in computational effort compared with the classical pattern-search method.

10.
A brief survey of gradient methods
孙聪  张亚 《运筹学学报》2021,25(3):119-132
Gradient methods are a class of first-order methods for solving optimization problems. Owing to their simple form and low computational cost, they are widely used in the solution of large-scale problems. This paper systematically reviews the iterative schemes and theoretical framework of gradient methods for smooth unconstrained problems. The most important parameter in a gradient method is the stepsize, whose choice directly determines the convergence properties and convergence rate of the method. The construction of gradient stepsizes and the corresponding convergence results are surveyed from four perspectives: line-search frameworks, approximation techniques, stochastic techniques, and alternating or cyclically repeated stepsizes. Extensions such as gradient methods for nonsmooth and constrained problems, acceleration techniques, and stochastic gradient methods are also briefly discussed.
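A representative of the approximation-based stepsizes surveyed above is the Barzilai-Borwein (BB) rule, where the stepsize mimics the secant information of the last step. The following is a minimal sketch of a gradient method with the "long" BB stepsize, with only a positivity check as a safeguard; it is a generic illustration, not any specific method from the survey.

```python
import numpy as np

def bb_gradient(grad, x0, n_iter=500, tol=1e-10):
    """Gradient method with the long Barzilai-Borwein stepsize
    alpha_k = (s^T s)/(s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x = x0.copy()
    g = grad(x)
    alpha = 1e-3                      # small conservative first step
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:   # gradient small enough: stop
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        if sy > 0:                    # BB step is well defined
            alpha = s.dot(s) / sy
        x, g = x_new, g_new
    return x

# Convex quadratic f(x) = 0.5 x^T diag(1, 10) x; the minimizer is the origin.
A = np.array([1.0, 10.0])
x_star = bb_gradient(lambda x: A * x, np.array([1.0, 1.0]))
```

Note that the iteration is typically nonmonotone in the objective value, which is why BB-type methods are usually paired with the nonmonotone line searches discussed elsewhere in this list.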

11.
By making a convex combination of the modified secant equations proposed by Yuan and by Wei et al., a hybrid secant equation, and with it a modified BFGS algorithm, is proposed. The hybridization parameter is computed effectively using the information available from recent iterations. Under proper conditions, it is shown that the proposed algorithm is globally, locally and superlinearly convergent. By using the performance profile introduced by Dolan and Moré, a comparison between the implementations of the proposed algori…

12.
In this paper, an unconstrained minimization algorithm is defined in which a nonmonotone line search technique is employed in association with a truncated Newton algorithm. Numerical results obtained for a set of standard test problems are reported which indicate that the proposed algorithm is highly effective in the solution of ill-conditioned as well as large-dimensional problems.

13.
In this paper we deal with the iterative computation of negative curvature directions of an objective function within large-scale optimization frameworks. Suitable directions of negative curvature of the objective function represent an essential tool to guarantee convergence to second-order critical points. However, an "adequate" negative curvature direction is often required to have a good resemblance to an eigenvector corresponding to the smallest eigenvalue of the Hessian matrix, so its computation may be a very difficult task on large-scale problems. Several strategies proposed in the literature compute such a direction by means of matrix factorizations, so that they may be inefficient or even impracticable in a large-scale setting. On the other hand, the iterative methods proposed either need to store a large matrix, or they need to rerun the recurrence. On this guideline, in this paper we propose the use of an iterative method based on a planar conjugate gradient scheme. Under mild assumptions, we provide theory for using the latter method to compute adequate negative curvature directions within optimization frameworks. In our proposal any matrix storage is avoided, along with any additional rerun.

14.
Conic-model methods are a class of non-quadratic-model optimization methods that carry richer interpolation information at each iteration than standard quadratic-model methods. Di and Sun (1996) proposed a conic-model trust-region method for unconstrained optimization. Following the idea of the filter technique of Fletcher and Leyffer (2002), and building on the work of Di and Sun (1996), this paper proposes a filter trust-region algorithm based on conic models for unconstrained optimization problems. Under appropriate conditions, convergence of the new algorithm is proved. Limited numerical results indicate that the new algorithm is effective.

15.
Based on a nonmonotone line-search technique and the IMPBOT algorithm, an ODE-type hybrid method for unconstrained optimization is proposed. Its main features are as follows: to obtain a trial step, the method solves only a system of linear equations at each iteration, rather than a trust-region subproblem; when the trial step is not accepted, the method performs a modified Wolfe-type nonmonotone line search to obtain the next iterate, thereby avoiding repeatedly solving the linear system. Under certain conditions, the proposed algorithm is globally and superlinearly convergent. Numerical results show that the method is effective.

16.
《Optimization》2012,61(2):163-179
In this article, we consider the global convergence of the Polak–Ribière–Polyak conjugate gradient method (abbreviated PRP method) for minimizing functions that have Lipschitz continuous partial derivatives. A novel form of nonmonotone line search is proposed to guarantee the global convergence of the PRP method. It is also shown that the PRP method has a linear convergence rate under some mild conditions when the nonmonotone line search reduces to a related monotone line search. The new nonmonotone line search needs to estimate the Lipschitz constant of the gradients of the objective functions, for which two practical estimations are proposed to help find a suitable initial step size for the PRP method. Numerical results show that the new line search approach is efficient in practical computation.
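The PRP coefficient itself is simple to state and implement. The sketch below applies PRP conjugate gradient to a convex quadratic with an exact line search (chosen here so the example is self-checking: on a quadratic with exact line search, PRP coincides with linear CG); it is a generic illustration, not the nonmonotone line search proposed in the article.

```python
import numpy as np

def prp_cg(A, b, x0, tol=1e-10, max_iter=50):
    """PRP conjugate gradient on the quadratic f(x) = 0.5 x^T A x - b^T x."""
    x = x0.copy()
    g = A @ x - b                              # gradient of the quadratic
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -g.dot(d) / d.dot(A @ d)       # exact minimizer along d
        x = x + alpha * d
        g_new = A @ x - b
        beta = g_new.dot(g_new - g) / g.dot(g)  # PRP coefficient
        d = -g_new + beta * d                  # new conjugate direction
        g = g_new
    return x

# Usage: a 2x2 symmetric positive definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = prp_cg(A, b, np.zeros(2))
```

For general nonlinear objectives the exact line search is replaced by an inexact one, and it is precisely the choice of that line search that the article's global convergence analysis addresses.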

17.
In this paper, we analyze the global convergence of the least-change secant method proposed by Dennis and Wolkowicz, when applied to convex objective functions. One of the most distinguished features of this method is that the Dennis-Wolkowicz update doesn't necessarily belong to the Broyden convex family and can be close to the DFP update, but it still has the self-correcting property. We prove that, for convex objective functions, this method with the commonly used Wolfe line search is globally convergent. We also provide some numerical results.

18.
The cyclic Barzilai–Borwein method for unconstrained optimization
In the cyclic Barzilai–Borwein (CBB) method, the same Barzilai–Borwein (BB) stepsize is reused for m consecutive iterations. It is proved that CBB is locally linearly convergent at a local minimizer with positive definite Hessian. Numerical evidence indicates that when m > n/2 ≥ 3, where n is the problem dimension, CBB is locally superlinearly convergent. In the special case m = 3 and n = 2, it is proved that the convergence rate is no better than linear, in general. An implementation of the CBB method, called adaptive cyclic Barzilai–Borwein (ACBB), combines a non-monotone line search and an adaptive choice for the cycle length m. In numerical experiments using the CUTEr test problem library, ACBB performs better than the existing BB gradient algorithm, while it is competitive with the well-known PRP+ conjugate gradient algorithm.
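The cycling idea is a small modification of the plain BB iteration: the stepsize is recomputed only once every m iterations and reused in between. The sketch below illustrates that recurrence only; it omits the nonmonotone line search and adaptive cycle length that make up ACBB, and all parameter values are illustrative.

```python
import numpy as np

def cbb_gradient(grad, x0, m=3, n_iter=300, tol=1e-10):
    """Cyclic BB: reuse the same BB stepsize for m consecutive iterations."""
    x = x0.copy()
    g = grad(x)
    alpha = 1e-3                      # conservative first stepsize
    for k in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        if k % m == m - 1:            # refresh the stepsize once per cycle
            s, y = x_new - x, g_new - g
            sy = s.dot(y)
            if sy > 0:
                alpha = s.dot(s) / sy
        x, g = x_new, g_new
    return x

# Convex quadratic f(x) = 0.5 x^T diag(1, 2) x; the minimizer is the origin.
A = np.array([1.0, 2.0])
x_star = cbb_gradient(lambda x: A * x, np.array([1.0, 1.0]))
```

Reusing the stepsize saves one inner product per skipped refresh and, as the abstract notes, can improve the local convergence rate for suitable cycle lengths.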

19.
董丽  周金川 《数学杂志》2015,35(1):173-179
This paper studies unconstrained optimization. A new descent method is obtained by using information from the current and previous iterates, together with a curve-search technique, to generate the next iterate. Global convergence of the algorithm is proved under rather weak conditions. When the objective function is uniformly convex, a linear convergence rate is established. Preliminary numerical experiments show that the algorithm is efficient.

20.
This paper considers simple modifications of the limited memory BFGS (L-BFGS) method for large scale optimization. It describes algorithms in which alternating ways of re-using a given set of stored difference vectors are outlined. The proposed algorithms resemble the L-BFGS method, except that the initial Hessian approximation is defined implicitly like the L-BFGS Hessian in terms of some stored vectors rather than the usual choice of a multiple of the unit matrix. Numerical experiments show that the new algorithms yield desirable improvement over the L-BFGS method. This revised version was published online in June 2006 with corrections to the Cover Date.
