Similar Documents
19 similar documents found (search time: 343 ms)
1.
A BFGS Algorithm for Minimizing Nonconvex Functions (cited 1 time: 0 self, 1 other)
For unconstrained optimization problems with nonconvex objective functions, this paper presents a class of modified BFGS algorithms. The idea is to modify the approximate Hessian matrix of the nonconvex function so as to obtain a descent direction while keeping the quasi-Newton condition satisfied. When the step size is chosen by a general line search model, local convergence of the algorithm is proved.
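The abstract leaves the update formula implicit. As a point of reference, here is a minimal sketch of the standard BFGS update that such modified algorithms start from, with the quasi-Newton (secant) condition checked explicitly; the variable names and the toy check are ours, not the paper's.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of the Hessian approximation B, with
    s = x_{k+1} - x_k and y = g_{k+1} - g_k.  The updated matrix
    satisfies the quasi-Newton (secant) condition B_new @ s = y and
    stays positive definite whenever the curvature y^T s is positive;
    modified variants enforce this even for nonconvex f."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Quick check of the secant condition on a toy pair (s, y).
rng = np.random.default_rng(0)
s, y = rng.standard_normal(3), rng.standard_normal(3)
if y @ s > 0:                       # curvature condition
    B_new = bfgs_update(np.eye(3), s, y)
    assert np.allclose(B_new @ s, y)
```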

2.
董丽  周金川 《数学杂志》2015,35(1):173-179
This paper studies unconstrained optimization problems. A new descent method is obtained by generating iterates from the information of the current and previous iterates together with a curve search technique. Global convergence of the algorithm is proved under rather weak conditions, and a linear convergence rate is established when the objective function is uniformly convex. Preliminary numerical experiments show that the algorithm is effective.

3.
In this paper we give a well-posed rank-one method for unconstrained optimization with the following desirable properties: the update matrices are symmetric positive definite, and under suitable conditions the method is globally convergent for nonconvex functions. Numerical test results are also reported.
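The abstract does not spell out the rank-one correction. For orientation, below is a minimal sketch of the classical symmetric rank-one (SR1) update, which plain quasi-Newton theory does not guarantee to be positive definite, hence the usual skipping safeguard; the method in the paper presumably adds its own well-posedness mechanism.

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) update of an approximate Hessian B.

    The update B + r r^T / (r^T s), with r = y - B s, satisfies the
    secant condition but is not automatically positive definite; the
    standard safeguard skips the update when |r^T s| is too small."""
    r = y - B @ s
    rs = r @ s
    if abs(rs) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # skip the update to keep it well defined
    return B + np.outer(r, r) / rs
```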

4.
景书杰  于俊霞 《数学杂志》2015,35(1):131-134
This paper proposes a new BFGS trust region algorithm for unconstrained optimization. Combining the BFGS method with the trust region method yields an improved BFGS trust region method that generalizes the two algorithms in [3,5]. Global convergence of the new algorithm is proved under suitable conditions.
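As a sketch of the trust-region side of such a method, here is the textbook Cauchy point for the quadratic subproblem built from a Hessian approximation B (for instance BFGS-updated); this is generic background, not the paper's algorithm.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point for the trust-region subproblem
        min  g^T p + 0.5 * p^T B p   s.t.  ||p|| <= delta,
    where B is the quasi-Newton Hessian approximation and g != 0.
    Steps along -g, clipped to the trust-region boundary."""
    g_norm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0
    if gBg > 0:  # model is convex along -g; may stop before the boundary
        tau = min(1.0, g_norm**3 / (delta * gBg))
    return -tau * (delta / g_norm) * g
```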

5.
郭洁  万中 《计算数学》2022,44(3):324-338
Based on an exponential penalty function, a recently proposed three-term conjugate gradient method for unconstrained optimization is modified and applied to more complicated large-scale minimax problems. The search direction generated by the method is proved to be a sufficient descent direction for every smooth subproblem, independent of the line search rule used. On this basis, an algorithm for large-scale minimax problems is designed, and its global convergence is proved under reasonable assumptions. Numerical experiments show that the algorithm outperforms similar algorithms in the literature.
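Neither the paper's three-term formula nor its exponential penalty appears in the abstract. As generic background under our own naming, the sketch below shows a common three-term conjugate gradient direction whose coefficients give g^T d = -||g||^2 exactly, independent of the line search (matching the sufficient-descent property described), together with the log-sum-exp form of an exponential penalty that smooths the max in a minimax problem. Neither is necessarily the paper's exact formula.

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """Three-term CG direction d = -g + beta*d_old - theta*y, with
    y = g_new - g_old, beta = g^T y / d^T y, theta = g^T d_old / d^T y.
    These coefficients give g_new^T d = -||g_new||^2 exactly, i.e.
    sufficient descent regardless of the line search."""
    y = g_new - g_old
    dy = d_old @ y
    if abs(dy) < 1e-12:
        return -g_new                     # restart with steepest descent
    beta = (g_new @ y) / dy
    theta = (g_new @ d_old) / dy
    return -g_new + beta * d_old - theta * y

def exp_penalty(fvals, p=100.0):
    """Exponential-penalty (log-sum-exp) smoothing of max_i f_i:
        (1/p) * log(sum_i exp(p * f_i)),
    shifted by the max for numerical stability; larger p tightens it."""
    fvals = np.asarray(fvals)
    m = fvals.max()
    return m + np.log(np.exp(p * (fvals - m)).sum()) / p
```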

6.
Owing to their simple iteration form and low storage requirements, gradient methods are widely used for large-scale unconstrained optimization. Based on a modified quadratic approximation model and a modified BFGS formula, a new approximately optimal step size is proposed. This step size is truncated by the two well-known Barzilai-Borwein (BB) step sizes so that it stays between them. Global convergence of the method is proved under suitable assumptions, and numerical experiments show that it outperforms some existing gradient methods.
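The approximately optimal step size itself is not given in the abstract, but the truncation between the two BB step sizes can be sketched directly; the candidate step is left abstract here.

```python
import numpy as np

def truncated_bb_step(s, y, alpha_candidate):
    """Keep a candidate step size between the two Barzilai-Borwein steps
        alpha_BB1 = s^T s / s^T y   and   alpha_BB2 = s^T y / y^T y.
    By Cauchy-Schwarz, alpha_BB2 <= alpha_BB1 whenever s^T y > 0."""
    sy = s @ y
    if sy <= 0:
        return alpha_candidate  # BB steps undefined; fall back unchanged
    bb1 = (s @ s) / sy
    bb2 = sy / (y @ y)
    return min(max(alpha_candidate, bb2), bb1)
```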

7.
黄海 《经济数学》2011,28(2):25-28
Building on a modified PRP conjugate gradient method, a sufficient-descent conjugate gradient algorithm for unconstrained optimization is proposed. The algorithm is proved to be globally convergent under the Wolfe line search, and numerical experiments show that it yields good numerical results.
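The paper's sufficient-descent modification is not stated in the abstract. As background, a plain PRP direction with the common nonnegativity safeguard (often written PRP+) looks as follows; the actual modification in the paper presumably differs.

```python
import numpy as np

def prp_plus_direction(g_new, g_old, d_old):
    """Conjugate gradient direction with the PRP coefficient
        beta_PRP = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2,
    clipped at zero (PRP+), which effects an automatic restart when
    beta would be negative."""
    beta = max((g_new @ (g_new - g_old)) / (g_old @ g_old), 0.0)
    return -g_new + beta * d_old
```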

8.
Drawing on the BFGS trust region algorithm for unconstrained optimization, a BFGS trust region algorithm for general nonlinear constrained optimization is established and its global convergence is proved. Numerical experiments show that the algorithm is effective.

9.
Application of the MBFGS Update in SQP Algorithms: Algorithm and Its Local Convergence (cited 1 time: 0 self, 1 other)
This paper studies techniques for keeping the matrices positive definite in SQP algorithms. Using the modified BFGS (MBFGS) formula proposed by Li and Fukushima for unconstrained problems, an SQP algorithm for equality constrained problems is proposed. It is proved that if the second-order sufficient conditions hold at a solution of the problem, the corresponding SQP algorithm is two-step superlinearly convergent.
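A minimal sketch of the MBFGS idea, with the modified difference vector recalled from the Li-Fukushima literature (check the paper for the exact form used in the SQP setting):

```python
import numpy as np

def mbfgs_pair(s, y, g_norm):
    """Modified difference vector in the spirit of Li-Fukushima's MBFGS:
        y_bar = y + t * ||g|| * s,
        t = 1 + max(0, -(y^T s) / ||s||^2) / ||g||.
    This guarantees y_bar^T s >= ||g|| * ||s||^2 > 0, so the BFGS update
    with the pair (s, y_bar) stays positive definite even when the
    problem is nonconvex."""
    t = 1.0 + max(0.0, -(y @ s) / (s @ s)) / g_norm
    return y + t * g_norm * s
```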

10.
陈忠 《数学杂志》2003,23(1):54-56
In [1]-[3], 郑权 et al. proposed a mean-value algorithm for unconstrained optimization and discussed its convergence under the assumption that the objective function f(x) is continuous. Assuming instead that f(x) is a convex function on a bounded closed set Ω, this paper proves that the mean-value algorithm for minimizing a convex function is linearly convergent.

11.
A modified BFGS algorithm is presented in this paper for solving unconstrained optimization problems whose Hessian matrix at the minimum point of the convex objective function is rank deficient. The main idea of the algorithm is first to add a modification term to the convex function to obtain an equivalent model, and then to simplify the model to derive the modified BFGS algorithm. The superlinear convergence of the algorithm is proved. Compared with the tensor algorithms presented by R. B. Schnabel (see [4],[5]), this method is more efficient for solving singular unconstrained optimization problems in terms of computational cost and complexity.

12.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search, an approach recognized in numerical practice as competitive for large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
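A minimal sketch of one standard nonmonotone form, the Grippo-Lampariello-Lucidi condition, assuming a backtracking implementation with our own parameter names; the second form studied in the paper is not reproduced here.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, rho=0.5, max_iter=50):
    """Backtracking with the Grippo-Lampariello-Lucidi nonmonotone
    Armijo condition: accept the step alpha when
        f(x + alpha*d) <= max(recent f values) + sigma * alpha * g^T d,
    which allows the objective to increase occasionally.  f_hist holds
    the last M objective values."""
    f_ref = max(f_hist)          # reference value over the memory window
    gd = g @ d                   # directional derivative, assumed negative
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + sigma * alpha * gd:
            return alpha
        alpha *= rho             # shrink and retry
    return alpha
```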

13.
In this paper, a new nonmonotone BFGS algorithm for unconstrained optimization is introduced. Under mild conditions, the global convergence of this new algorithm on convex functions is proved. Some numerical experiments show that the new nonmonotone BFGS algorithm is competitive with the BFGS algorithm.

14.
Convergence of the BFGS Algorithm for Nonconvex Optimization Problems (cited 1 time: 0 self, 1 other)
The BFGS algorithm is one of the best-known numerical algorithms for unconstrained optimization. Whether it is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm with Wolfe line search applied to nonconvex objective functions and gives a sufficient condition under which the algorithm converges.
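For reference, the Wolfe line search conditions the paper assumes can be checked as follows; parameter names are ours.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the standard Wolfe conditions for a step alpha along d:
        f(x + a*d) <= f(x) + c1 * a * g^T d    (sufficient decrease)
        grad(x + a*d)^T d >= c2 * g^T d        (curvature)
    with 0 < c1 < c2 < 1.  The curvature condition implies s^T y > 0,
    which keeps the BFGS matrices positive definite."""
    gd = grad(x) @ d
    x_new = x + alpha * d
    return (f(x_new) <= f(x) + c1 * alpha * gd
            and grad(x_new) @ d >= c2 * gd)
```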

15.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, is globally and superlinearly convergent for convex functions. Numerical experiments with this class, using the well-known quasi-Newton BFGS, DFP and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if a simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
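A minimal sketch of self-scaling combined with the BFGS update, using the common Oren-Luenberger scaling factor; the paper combines this with modified quasi-Newton updates not reproduced here.

```python
import numpy as np

def self_scaled_bfgs_update(B, s, y):
    """BFGS update applied to the scaled matrix tau * B, with the
    Oren-Luenberger scaling tau = y^T s / (s^T B s), which shifts the
    eigenvalues of B toward the observed local curvature before the
    update and helps avoid unduly large eigenvalues."""
    tau = (y @ s) / (s @ B @ s)
    B = tau * B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```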

16.
In this paper, a new nonmonotone MBFGS algorithm for unconstrained optimization is proposed. Under suitable assumptions, the global and superlinear convergence of the new algorithm on convex objective functions is established. Some numerical experiments show that the new nonmonotone MBFGS algorithm is competitive with the MBFGS algorithm and the nonmonotone BFGS algorithm.

17.
Convergence analysis of a modified BFGS method on convex minimizations (cited 2 times: 0 self, 2 other)
A modified BFGS method is proposed for unconstrained optimization. The global convergence and the superlinear convergence on convex functions are established under suitable assumptions. Numerical results show that the method is promising.

18.
Global convergence is proved for a partitioned BFGS algorithm applied to a partially separable problem with a convex decomposition. This case covers a well-known practical optimization method for large-dimensional unconstrained problems. Inexact solution of the linear system defining the search direction and variants of the steplength rule are also shown to be acceptable without affecting the global convergence properties.
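A minimal sketch of the partitioned idea under a hypothetical data layout: each element function keeps its own small BFGS matrix, updated only in its own variables, and the full approximation is assembled by scattering the element blocks.

```python
import numpy as np

def assemble_partitioned_hessian(n, elements):
    """Assemble the full Hessian approximation of a partially separable
        f(x) = sum_i f_i(x[idx_i])
    from per-element BFGS matrices B_i (hypothetical layout:
    elements = [(idx_i, B_i), ...], idx_i a list of variable indices)."""
    B = np.zeros((n, n))
    for idx, B_i in elements:
        B[np.ix_(idx, idx)] += B_i   # scatter element block into B
    return B

# e.g. f(x) = f1(x0, x1) + f2(x1, x2) on n = 3 variables:
B = assemble_partitioned_hessian(3, [([0, 1], np.eye(2)), ([1, 2], np.eye(2))])
```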

19.
In this paper, an adaptive trust region algorithm based on Moreau-Yosida regularization is proposed for solving nonsmooth unconstrained optimization problems. The algorithm combines a modified secant equation with the BFGS update formula and an adaptive trust region radius that uses both function and gradient information. The global convergence and the local superlinear convergence of the algorithm are proven under suitable conditions. Preliminary numerical comparisons with some existing algorithms show that the proposed algorithm is quite promising for nonsmooth unconstrained optimization.
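A minimal sketch of the Moreau-Yosida regularization that makes a nonsmooth f amenable to smooth trust-region machinery; the inner minimization is solved numerically here for illustration only, with our own parameter names.

```python
import numpy as np
from scipy.optimize import minimize

def moreau_yosida(f, x, lam=1.0):
    """Moreau-Yosida regularization of a (possibly nonsmooth) convex f:
        F(x) = min_z  f(z) + ||z - x||^2 / (2 * lam).
    F is continuously differentiable with gradient (x - prox) / lam,
    where prox is the minimizer, so smooth quasi-Newton / trust-region
    methods apply to F even when f itself is nonsmooth."""
    res = minimize(lambda z: f(z) + np.sum((z - x) ** 2) / (2 * lam),
                   x, method="Nelder-Mead")  # derivative-free inner solve
    prox = res.x
    return res.fun, (x - prox) / lam          # F(x) and grad F(x)

# Example: f(x) = ||x||_1 is nonsmooth, but F is smooth at the origin.
val, grad = moreau_yosida(lambda z: np.abs(z).sum(), np.array([0.3, -0.1]))
```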
