Similar Documents
1.
Through an error analysis of the Taylor expansion of the objective function, a new model improving on the quadratic model is proposed. On this basis an improved quasi-Newton condition is derived, together with the corresponding Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, and the algorithm is proved to be globally convergent under suitable conditions. Numerical experiments on standard test functions selected from a test-function library compare the classical BFGS algorithm with the improved one; the results show that the improved algorithm outperforms the classical BFGS algorithm.
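For reference, the classical BFGS update that such modified quasi-Newton conditions start from can be sketched as follows (a minimal NumPy illustration; the paper's improved condition replaces the gradient-difference vector by a corrected one, which is not reproduced here):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of the Hessian approximation B.

    s = x_{k+1} - x_k, y = g_{k+1} - g_k; requires the curvature
    condition s @ y > 0. The updated matrix is symmetric and
    satisfies the secant equation B_new @ s == y.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```

The modified methods discussed in these abstracts keep this rank-two structure and change only the vector playing the role of y.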

2.
A general line-search model for unconstrained optimization and the global convergence of the BFGS method
This paper gives a general form of acceptable step-length rules for the line searches used in unconstrained optimization algorithms; it covers most existing step-length rules as special cases. Its basic properties are studied, and finally the global convergence of the BFGS algorithm for unconstrained optimization combined with this general line-search model is proved.
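The weak Wolfe conditions are one common instance of the acceptable step-length rules such a general model covers; a minimal check (a hypothetical helper, not the paper's general formulation) might look like:

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Return True if step length alpha satisfies the weak Wolfe
    conditions along a descent direction d: sufficient decrease
    (Armijo) plus the curvature condition."""
    gd = grad(x) @ d                      # directional derivative, < 0 for descent
    x_new = x + alpha * d
    armijo = f(x_new) <= f(x) + c1 * alpha * gd
    curvature = grad(x_new) @ d >= c2 * gd
    return armijo and curvature
```

Most classical step-length rules (Armijo backtracking, Goldstein, strong Wolfe) differ only in which of these inequalities they impose and how tightly.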

3.
To address the fact that the Hessian matrix at an iterate need not be positive definite when Newton's method is applied to minimize a general nonconvex function, a refined modified Newton method is proposed. The method makes full use of the first- and second-order information of the objective function at the current iterate to choose a suitable search direction, and is a hybrid of steepest descent, Newton's method, and existing modified Newton methods. Global convergence of the algorithm is established under fairly weak conditions. Further numerical experiments verify that, compared with earlier algorithms of the same type, the proposed algorithm … [abstract truncated]
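A much cruder version of this idea — take the Newton step when the Hessian is safely positive definite, otherwise fall back to steepest descent — can be sketched as follows (illustration only; the paper's selection rule is finer-grained):

```python
import numpy as np

def descent_direction(g, H):
    """Choose a descent direction from first- and second-order information:
    the Newton step -H^{-1} g when H is positive definite (Cholesky
    succeeds) and the step is genuinely a descent direction, otherwise
    the steepest-descent direction -g."""
    try:
        np.linalg.cholesky(H)             # raises LinAlgError if H is not PD
        d = -np.linalg.solve(H, g)
        if g @ d < 0:                     # guard against near-singular H
            return d
    except np.linalg.LinAlgError:
        pass
    return -g
```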

4.
A row-and-column update algorithm for large sparse unconstrained optimization problems
This paper proposes a class of simple, easily implemented row-and-column update algorithms suited to large sparse optimization problems, and establishes their local superlinear convergence. Extensive numerical experiments indicate that this is a fairly satisfactory update algorithm. The new algorithm can likewise be applied to large symmetric systems of nonlinear equations.

5.
A BFGS algorithm for minimizing nonconvex functions
For unconstrained optimization problems with nonconvex objective functions, this paper gives a class of modified BFGS algorithms. The idea is to modify the approximate Hessian matrix of the nonconvex function so as to obtain a descent direction while ensuring that the quasi-Newton condition holds. When the step length is chosen according to the general line-search model, local convergence of the algorithm is proved.

6.
This paper studies the global convergence of quasi-Newton trust-region methods based on conic models for constrained optimization, and gives conditions that ensure the global convergence of this class of methods.

7.
8.
This paper proposes a new family of quasi-Newton methods for solving unconstrained optimization problems and proves the following property: under exact line searches, the iteration directions and iterate sequences generated at every step by all methods in the family depend only on the parameter ρ. The family can be viewed as a generalization of the Huang family of quasi-Newton methods.

9.
10.
A class of non-quasi-Newton algorithms and their convergence
For solving unconstrained optimization problems, this paper proposes a class of non-quasi-Newton algorithms. The methods likewise possess quadratic termination, the generated matrix sequences preserve symmetric positive definiteness, and the global convergence and superlinear convergence of every algorithm in the new class are proved.

11.
We present a numerical implementation of the parallel gradient distribution (PGD) method for the solution of large-scale unconstrained optimization problems. The proposed parallel algorithm is characterized by a parallel phase which exploits the portions of the gradient of the objective function assigned to each processor; then, a coordination phase follows which, by a synchronous interaction scheme, optimizes over the partial results obtained by the parallel phase. The parallel and coordination phases are implemented using a quasi-Newton limited-memory BFGS approach. The computational experiments, carried out on a network of UNIX workstations by using the parallel software tool PVM, show that parallelization efficiency was problem dependent and ranged between 0.15 and 8.75. For the 150 problems solved by PGD on more than one processor, 85 cases had parallelization efficiency below 1, while 65 cases had a parallelization efficiency above 1.
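The limited-memory BFGS step used in such methods is typically computed with the two-loop recursion; a compact serial sketch (not the paper's parallel implementation) is:

```python
import numpy as np

def lbfgs_direction(g, pairs):
    """Two-loop recursion: apply the implicit L-BFGS inverse Hessian,
    built from stored (s_i, y_i) pairs (oldest first, each with
    s_i @ y_i > 0), to the gradient g. Returns H @ g; the search
    direction is its negative."""
    q = np.array(g, dtype=float)
    rhos = [1.0 / (y @ s) for s, y in pairs]
    alphas = []
    for (s, y), rho in zip(reversed(pairs), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if pairs:                              # standard scaling H0 = gamma * I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), rho, a in zip(pairs, rhos, reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q
```

Only the stored pairs and two dot-product loops are needed, which is what makes the method attractive for large-scale and distributed settings.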

12.
New Quasi-Newton Equation and Related Methods for Unconstrained Optimization
In unconstrained optimization, the usual quasi-Newton equation is B_{k+1}s_k = y_k, where y_k is the difference of the gradients at the last two iterates. In this paper, we propose a new quasi-Newton equation, B_{k+1}s_k = ỹ_k, in which ỹ_k is based on both the function values and the gradients at the last two iterates. The new equation is superior to the old one in the sense that ỹ_k approximates ∇²f(x_{k+1})s_k better than y_k does. Modified quasi-Newton methods based on the new quasi-Newton equation are locally and superlinearly convergent. Extensive numerical experiments have been conducted which show that the new quasi-Newton methods are encouraging.
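One widely cited corrected secant vector of this kind (assuming the common choice of the step s_k itself as the correction direction; the exact formula varies between papers) combines function values and gradients as follows:

```python
import numpy as np

def modified_secant_vector(s, y, f_old, f_new, g_old, g_new):
    """Corrected secant vector using function values as well as gradients.

    theta measures how far f deviates from its quadratic model along
    the step s; it vanishes identically when f is quadratic, so the
    corrected vector reduces to the ordinary y = g_new - g_old there.
    """
    theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new) @ s
    return y + (theta / (s @ s)) * s
```

Because theta is zero on quadratics, the modification only kicks in where third-order behavior of f makes the plain gradient difference a poor curvature sample.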

13.
By combining Newton's method with the PRP conjugate gradient method, this paper proposes a modified PRP method that incorporates second-order derivative information. Under suitable assumptions the algorithm is globally convergent, and numerical examples demonstrate its effectiveness.

14.
The paper discusses several versions of the method of shortest residuals, a specific variant of the conjugate gradient algorithm, first introduced by Lemaréchal and Wolfe and discussed by Hestenes in a quadratic case. In the paper we analyze the global convergence of the versions considered. Numerical comparison of these versions of the method of shortest residuals and an implementation of a standard Polak–Ribière conjugate gradient algorithm is also provided. It supports the claim that the method of shortest residuals is a viable technique, competitive to other conjugate gradient algorithms.
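For context, the standard Polak–Ribière (PRP) direction update used as the comparison baseline can be sketched as:

```python
import numpy as np

def prp_direction(g_new, g_old, d_old):
    """Polak–Ribière conjugate gradient direction, with the common
    PRP+ safeguard beta = max(beta, 0), which restarts along -g
    whenever the PRP coefficient turns negative."""
    beta = (g_new @ (g_new - g_old)) / (g_old @ g_old)
    return -g_new + max(beta, 0.0) * d_old
```

Note that when consecutive gradients coincide the coefficient vanishes and the method restarts with a pure steepest-descent step.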

15.
Analysis of a self-scaling quasi-Newton method
We study the self-scaling BFGS method of Oren and Luenberger (1974) for solving unconstrained optimization problems. For general convex functions, we prove that the method is globally convergent with inexact line searches. We also show that the directions generated by the self-scaling BFGS method approach Newton's direction asymptotically. This would ensure superlinear convergence if, in addition, the search directions were well-scaled, but we show that this is not always the case. We find that the method has a major drawback: to achieve superlinear convergence it may be necessary to evaluate the function twice per iteration, even very near the solution. An example is constructed to show that the step-sizes required to achieve a superlinear rate converge to 2 and 0.5 alternately. This work was supported by National Science Foundation Grant CCR-9101359, and by the Department of Energy Grant DE-FG02-87ER25047. This work was performed while the author was visiting Northwestern University.
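The Oren–Luenberger idea is to rescale the current approximation before applying the BFGS update; a minimal sketch with one common choice of scaling factor (an illustration, not the exact variant analyzed in the paper) is:

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """Scale B by tau = (y.s)/(s.Bs) before the usual BFGS update.
    The scaling keeps the spectrum of B commensurate with the local
    curvature sampled along s; the secant equation B_new @ s == y
    still holds after the update."""
    Bs = B @ s
    tau = (y @ s) / (s @ Bs)
    B, Bs = tau * B, tau * Bs
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```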

16.
A nonmonotone adaptive trust-region algorithm with fixed step length
A nonmonotone adaptive trust-region algorithm with a fixed step length is proposed for solving unconstrained optimization problems. The trust-region radius is updated by an adaptive technique, and when the trial step is not accepted, the algorithm uses a fixed step length to find the next iterate. Under suitable conditions, the algorithm is proved to be globally and superlinearly convergent. Preliminary numerical experiments indicate that the algorithm works well on high-dimensional problems.

17.
The BFGS method is the most effective of the quasi-Newton methods for solving unconstrained optimization problems. Wei, Li, and Qi [16] have proposed some modified BFGS methods based on the new quasi-Newton equation B_{k+1}s_k = y*_k, where y*_k is the sum of y_k and A_k s_k, and A_k is some matrix. The average performance of Algorithm 4.3 in [16] is better than that of the BFGS method, but its superlinear convergence is still open. This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.

18.
In this paper, a switching method for unconstrained minimization is proposed. The method is based on the modified BFGS method and the modified SR1 method. The eigenvalues and condition numbers of both the modified updates are evaluated and used in the switching rule. When the condition number of the modified SR1 update is superior to the modified BFGS update, the step in the proposed quasi-Newton method is the modified SR1 step. Otherwise the step is the modified BFGS step. The efficiency of the proposed method is tested by numerical experiments on small, medium and large scale optimization. The numerical results are reported and analyzed to show the superiority of the proposed method.
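The SR1 update, and a toy version of a condition-number switching rule, can be sketched as follows (the paper's rule uses modified updates and eigenvalue estimates; this sketch only illustrates the mechanism):

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-one update; skipped when the denominator is too
    small, the standard SR1 safeguard. When applied, the result
    satisfies the secant equation B_new @ s == y."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) <= eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

def bfgs_update(B, s, y):
    """Classical BFGS update, used as the fallback branch."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def switched_update(B, s, y):
    """Prefer the SR1 result when it is positive definite and better
    conditioned than the BFGS result; otherwise take the BFGS result."""
    B_sr1, B_bfgs = sr1_update(B, s, y), bfgs_update(B, s, y)
    if np.all(np.linalg.eigvalsh(B_sr1) > 0) and \
            np.linalg.cond(B_sr1) <= np.linalg.cond(B_bfgs):
        return B_sr1
    return B_bfgs
```

SR1 can capture indefinite curvature that BFGS cannot, which is why hybrid rules of this kind are attractive.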

19.
This paper attempts to exploit, in limited-memory algorithms, the information provided by objective function values. First, a new quadratic function approximating the objective is constructed from interpolation conditions, yielding a new weak secant equation. Combining this weak secant equation with the weak secant equation of Yuan [1], a family of limited-memory BFGS-type algorithms that includes the standard L-BFGS method is given, and the convergence of this family is proved. Test functions selected from the standard test collection CUTE were used in numerical experiments; the results show that the algorithms in this family all perform similarly to the standard L-BFGS method.
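A weak secant equation relaxes the full vector secant condition to a single scalar condition along the step; a sketch of the check (helper name hypothetical) is:

```python
import numpy as np

def satisfies_weak_secant(B_new, s, y, tol=1e-8):
    """Weak secant condition: s^T B_new s == s^T y. It constrains only
    the curvature of B_new along the step s, so any matrix satisfying
    the full secant equation B_new @ s == y satisfies it a fortiori."""
    lhs, rhs = s @ (B_new @ s), s @ y
    return abs(lhs - rhs) <= tol * max(1.0, abs(rhs))
```

Because it imposes only one scalar constraint per step, the weak secant equation leaves room for extra interpolation conditions, such as matching function values, which is the route taken in the abstract above.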

