Similar Literature (20 results)
1.
Quasi-Newton Methods for Unconstrained Optimization   (Cited by 3: 0 self, 3 others)
A revised algorithm is given for unconstrained optimization using quasi-Newton methods. The method is based on recurring the factorization of an approximation to the Hessian matrix. Knowledge of this factorization allows greater flexibility when choosing the direction of search while minimizing the adverse effects of rounding error. The control of rounding error is particularly important when analytical derivatives are unavailable, and a modification of the algorithm to accept finite-difference approximations to the derivatives is given.
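Where analytical derivatives are unavailable, as in the modification described above, a forward-difference approximation is the usual substitute. A minimal sketch of that ingredient (not the paper's factorized implementation; the step size h and the Rosenbrock test function are illustrative):

```python
import numpy as np

def fd_gradient(f, x, h=1e-7):
    """Forward-difference approximation to the gradient of f at x."""
    g = np.empty_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h                      # perturb one coordinate at a time
        g[i] = (f(xp) - fx) / h
    return g

# Illustrative check on the Rosenbrock function
rosen = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
print(fd_gradient(rosen, np.array([1.2, 1.0])))   # close to the analytic gradient
```

The rounding-error concern the abstract raises is visible here: h must balance truncation error (too large) against cancellation in f(xp) - fx (too small).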

2.
The study of quasi-Newton methods, and of their global convergence, for minimizing general objective functions has become one of the most fundamental open problems in quasi-Newton theory. This paper takes that problem further: a new class of generalized quasi-Newton algorithms for unconstrained optimization is proposed and, combined with the Goldstein line search, the algorithms are proved globally convergent for minimizing general nonconvex objective functions.
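For reference, the Goldstein test used above accepts a step length $\alpha$ along a descent direction $d$ when $f(x) + (1-c)\alpha g^\top d \le f(x+\alpha d) \le f(x) + c\alpha g^\top d$ for some $c \in (0, 1/2)$. A minimal bracketing sketch under that standard definition (parameter names and the bracketing strategy are illustrative, not taken from the paper):

```python
import numpy as np

def goldstein_search(f, x, g, d, c=0.25, alpha=1.0, max_iter=50):
    """Find a step length satisfying both Goldstein inequalities, 0 < c < 1/2."""
    gd = g @ d                    # directional derivative; must be negative
    fx = f(x)
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        if fa > fx + c * alpha * gd:            # too little decrease: shrink
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif fa < fx + (1.0 - c) * alpha * gd:  # step too short: grow
            lo = alpha
            alpha = 2.0 * alpha if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return alpha                        # both conditions hold
    return alpha
```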

3.
New Quasi-Newton Equation and Related Methods for Unconstrained Optimization   (Cited by 10: 0 self, 10 others)
In unconstrained optimization, the usual quasi-Newton equation is $B_{k+1}s_k = y_k$, where $y_k$ is the difference of the gradients at the last two iterates. In this paper, we propose a new quasi-Newton equation, $B_{k+1}s_k = \tilde{y}_k$, in which $\tilde{y}_k$ is based on both the function values and gradients at the last two iterates. The new equation is superior to the old equation in the sense that $\tilde{y}_k$ better approximates $\nabla^2 f(x_{k+1})s_k$ than $y_k$. Modified quasi-Newton methods based on the new quasi-Newton equation are locally and superlinearly convergent. Extensive numerical experiments have been conducted which show that the new quasi-Newton methods are encouraging.
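The form of $\tilde{y}_k$ most often quoted for this equation in later surveys is $\tilde{y}_k = y_k + (\vartheta_k / s_k^\top u)\,u$ with $\vartheta_k = 6(f_k - f_{k+1}) + 3(g_k + g_{k+1})^\top s_k$ and any $u$ satisfying $s_k^\top u \ne 0$. A sketch under that assumption (the default $u = s_k$ is one common choice):

```python
import numpy as np

def modified_y(fk, fk1, gk, gk1, s, u=None):
    """Function-value-based secant vector y~, so that B_{k+1} s = y~.

    Assumes the widely quoted form: y~ = y + (theta / s'u) u with
    theta = 6(f_k - f_{k+1}) + 3(g_k + g_{k+1})'s; u defaults to s.
    """
    y = gk1 - gk
    if u is None:
        u = s                    # any u with s'u != 0 is admissible
    theta = 6.0 * (fk - fk1) + 3.0 * (gk + gk1) @ s
    return y + (theta / (s @ u)) * u
```

When f is quadratic, theta vanishes and the equation reduces to the usual $B_{k+1}s_k = y_k$.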

4.
The authors derived what they termed quasi-Newton multi-step methods in [2]. These methods have demonstrated substantial numerical improvements over the standard single-step Secant-based BFGS. Such methods use a variant of the Secant equation that the updated Hessian (or its inverse) satisfies at each iteration. In this paper, new methods are explored for which the updated Hessians satisfy multiple relations of Secant type. A rational model is employed in developing the new methods. The model hosts a free parameter, which is exploited in enforcing symmetry on the updated Hessian approximation matrix thus obtained. The numerical performance of these techniques is then investigated and compared to that of other methods. Our results are encouraging, and the improvements obtained supersede those of other existing methods at minimal extra storage and computational overhead.

5.
A Modified BFGS Algorithm for Unconstrained Optimization   (Cited by 7: 0 self, 7 others)
In this paper we present a modified BFGS algorithm for unconstrained optimization. The BFGS algorithm updates an approximate Hessian which satisfies the most recent quasi-Newton equation. The quasi-Newton condition can be interpreted as the interpolation condition that the gradient value of the local quadratic model matches that of the objective function at the previous iterate. Our modified algorithm requires that the function value is matched, instead of the gradient value, at the previous iterate. The modified algorithm preserves the global and local superlinear convergence properties of the BFGS algorithm. Numerical results are presented, which suggest that a slight improvement has been achieved.
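One published way to impose this function-value condition is to rescale $y_k$ before the standard BFGS update, using $t_k = 2(f_k - f_{k+1} + g_{k+1}^\top s_k)/(s_k^\top y_k)$; $t_k = 1$ recovers plain BFGS. A hedged sketch built on the textbook BFGS formula (the scaling is the commonly cited variant, not necessarily every detail of the paper's algorithm):

```python
import numpy as np

def modified_bfgs_update(B, s, y, fk, fk1, gk1):
    """BFGS update of B with y rescaled to interpolate the previous f-value.

    t = 2(f_k - f_{k+1} + g_{k+1}'s) / (s'y); t = 1 gives plain BFGS.
    """
    t = 2.0 * (fk - fk1 + gk1 @ s) / (s @ y)
    yt = t * y                               # rescaled gradient difference
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / (s @ Bs)  # standard BFGS form
              + np.outer(yt, yt) / (s @ yt))
```

Implementations typically safeguard t within fixed bounds so that the update stays positive definite.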

6.
By using the curvature information obtained in the previous step in place of the curvature of the quadratic model on the segment from $x_k$ to $x_{k+1}$, a BFGS-like algorithm with convergence properties similar to those of BFGS is given, and the relationship between the new algorithm and self-scaling quasi-Newton methods is revealed. Numerical experiments on standard test functions selected from the CUTE test library compare the new algorithm with the standard BFGS algorithm and other modified BFGS algorithms. The results indicate that the new algorithm behaves somewhat like a self-scaling quasi-Newton algorithm.

7.
A quasi-Newton method for unconstrained function minimization is described which requires very little additional programming to that required for the memory gradient method of Miele & Cantrell and which usually converges in far fewer iterations. The new method is essentially that of Fletcher & Powell with memory; computational experience shows that it can be more efficient than the method of Fletcher & Powell.

8.
By combining Newton's method with the PRP conjugate gradient method, this paper proposes a modified PRP method in which second-order derivative information is incorporated. Under appropriate assumptions the algorithm is globally convergent, and numerical examples demonstrate its effectiveness.
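For context, the baseline PRP direction being modified is $d_{k+1} = -g_{k+1} + \beta_k d_k$ with $\beta_k = g_{k+1}^\top(g_{k+1} - g_k)/\lVert g_k\rVert^2$; the abstract does not give the exact form in which the second-order information enters, so the sketch below shows only the standard PRP ingredient:

```python
import numpy as np

def prp_direction(g_new, g_old, d_old):
    """Standard Polak-Ribiere-Polyak search direction (the baseline here)."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    return -g_new + beta * d_old
```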

9.
Global Convergence of a Generalized Quasi-Newton Method for Nonconvex Unconstrained Optimization   (Cited by 3: 0 self, 3 others)
陈兰平  焦宝聪 《应用数学》2005,18(4):573-579
This paper proposes a new class of generalized quasi-Newton algorithms for unconstrained optimization and, using a class of inexact line searches, proves that the algorithms are globally convergent for minimizing general nonconvex objective functions.

10.
Quasi-Newton equations play a central role in quasi-Newton methods for optimization, and various quasi-Newton equations are available. This paper gives a survey of these quasi-Newton equations and studies the properties of quasi-Newton methods whose updates satisfy different quasi-Newton equations. These include single-step quasi-Newton equations that use only gradient information or that use both gradient and function value information in one step, and multi-step quasi-Newton equations that use the gradient information from the last m steps. The main properties studied include the finite termination property, invariance, heredity of positive definite updates, consistency of search directions, and global convergence and local superlinear convergence properties.
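The families surveyed differ only in the right-hand side they place in $B_{k+1}s_k = \cdot$. A schematic sketch collecting representative choices (the function-value variant repeats the form quoted under entry 3; the multi-step routine merely measures how far an update is from the last m secant pairs, rather than constructing one):

```python
import numpy as np

def secant_rhs(gk, gk1):
    """Single-step, gradient-only equation: B_{k+1} s_k = y_k."""
    return gk1 - gk

def secant_rhs_fval(fk, fk1, gk, gk1, s):
    """Single-step equation using gradients and function values (cf. entry 3)."""
    y = gk1 - gk
    theta = 6.0 * (fk - fk1) + 3.0 * (gk + gk1) @ s
    return y + (theta / (s @ s)) * s

def multi_secant_residual(B, S, Y):
    """Residual of the last m secant relations B s_j = y_j.

    S, Y are n-by-m matrices whose columns hold the last m steps and
    gradient differences; multi-step updates drive this toward zero.
    """
    return np.linalg.norm(B @ S - Y)
```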

11.
12.
In this note, we aim at improving the proofs of Theorems 2.1, 2.2, and 4.2 in Andrei (J Optim Theory Appl 141:249–264, 2009).

13.
Optimization, 2012, 61(5): 731-758
In this article, the convergence properties of the DFP algorithm with inexact line searches on uniformly convex functions are investigated. An inexact line search is proposed, and the global convergence and superlinear convergence of the DFP algorithm with this line search on uniformly convex functions are proved.
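For reference, the DFP update of the inverse-Hessian approximation $H$ analyzed here is the classical one, $H_{+} = H + ss^\top/(s^\top y) - Hyy^\top H/(y^\top Hy)$. A sketch:

```python
import numpy as np

def dfp_update(H, s, y):
    """Classical DFP update of the inverse-Hessian approximation H."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
```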

14.
We present a numerical implementation of the parallel gradient distribution (PGD) method for the solution of large-scale unconstrained optimization problems. The proposed parallel algorithm is characterized by a parallel phase, which exploits the portions of the gradient of the objective function assigned to each processor; a coordination phase then follows which, through a synchronous interaction scheme, optimizes over the partial results obtained in the parallel phase. Both phases are implemented using a quasi-Newton limited-memory BFGS approach. The computational experiments, carried out on a network of UNIX workstations using the parallel software tool PVM, show that parallelization efficiency was problem-dependent and ranged between 0.15 and 8.75. Of the 150 problems solved by PGD on more than one processor, 85 cases had parallelization efficiency below 1, while 65 cases had parallelization efficiency above 1.
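The limited-memory BFGS step used in both phases can be formed matrix-free with the standard two-loop recursion; a sketch following the textbook scheme (history handling and initial scaling are the usual choices, not the PGD-specific code):

```python
import numpy as np

def lbfgs_direction(g, S, Y):
    """Two-loop recursion: returns -H g for the limited-memory BFGS H.

    S, Y are lists of the last m step / gradient-difference pairs,
    ordered oldest first.
    """
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)                # first loop: newest to oldest
        alphas.append(a)
        q -= a * y
    if S:                                # usual initial scaling H0 = gamma*I
        gamma = (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
        q *= gamma
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        b = rho * (y @ q)                # second loop: oldest to newest
        q += (a - b) * s
    return -q
```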

15.
16.
Convergence of Quasi-Newton Trust-Region Methods Based on Conic Models for Unconstrained Optimization (in English)   (Cited by 3: 0 self, 3 others)
This paper studies the global convergence of quasi-Newton trust-region methods based on conic models for unconstrained optimization. Conditions ensuring the global convergence of this class of methods are given. It is also proved that when the dogleg method is used to compute an approximate solution of the conic-model trust-region subproblem in these algorithms, the conditions ensuring global convergence are satisfied.
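For the quadratic-model analogue (the conic model adds a rational denominator the abstract does not spell out), the dogleg path walks from the Cauchy point toward the full quasi-Newton step until it meets the trust-region boundary. A sketch of that classical construction:

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Classical dogleg step for min g's + 0.5 s'Bs subject to ||s|| <= delta."""
    pB = np.linalg.solve(B, -g)              # full quasi-Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / (g @ B @ g) * g          # Cauchy (steepest-descent) point
    if np.linalg.norm(pU) >= delta:
        return delta * pU / np.linalg.norm(pU)
    d = pB - pU                              # walk along the dogleg segment
    a, b, c = d @ d, 2.0 * (pU @ d), pU @ pU - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return pU + tau * d
```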

17.
A Nonmonotone Adaptive Trust-Region Algorithm with Fixed Step Size   (Cited by 1: 0 self, 1 other)
A nonmonotone adaptive trust-region algorithm with fixed step size is proposed for unconstrained optimization. The trust-region radius is updated by an adaptive technique, and when the trial step is not accepted, a fixed step size is used to obtain the next iterate. Under appropriate conditions, the algorithm is proved to be globally and superlinearly convergent. Preliminary numerical experiments indicate that the algorithm performs well on high-dimensional problems.

18.
Combining a nonmonotone trust-region method with a nonmonotone line-search technique, a new algorithm for unconstrained optimization is proposed. A line search is used at every trust-region step, so each iteration achieves sufficient descent, which speeds up the iteration. Under certain conditions, the algorithm is proved to be globally convergent with a locally superlinear convergence rate. Numerical experiments show that the algorithm is very effective.
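The nonmonotone test typically used in such hybrids is of Grippo-Lampariello-Lucidi type: accept $\alpha$ once $f(x+\alpha d) \le \max_{0\le j<m} f_{k-j} + \delta\,\alpha\,g^\top d$. A backtracking sketch assuming that standard form rather than the paper's exact rule:

```python
import numpy as np

def nonmonotone_backtracking(f, x, g, d, f_hist, delta=1e-4, rho=0.5, alpha=1.0):
    """Backtrack until the GLL nonmonotone Armijo-type condition holds.

    f_hist holds the last m objective values; the reference is their max,
    which is what makes the acceptance test nonmonotone.
    """
    f_ref = max(f_hist)
    gd = g @ d                    # must be negative for a descent direction
    for _ in range(50):           # bounded backtracking
        if f(x + alpha * d) <= f_ref + delta * alpha * gd:
            break
        alpha *= rho
    return alpha
```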

19.
A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques, allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion.

The Newton, truncated Newton, and LBFGS methods are general-purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed.

20.
In this paper, an improved spectral conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. Different from existing methods, the spectral and conjugate parameters are chosen such that the obtained search direction is always sufficiently descent as well as being close to the quasi-Newton direction. With these suitable choices, the additional assumption in the method proposed by Andrei on the boundedness of the spectral parameter is removed. Under some mild conditions, global convergence is established. Numerical experiments are employed to demonstrate the efficiency of the algorithm for solving large-scale benchmark test problems, particularly in comparison with the existing state-of-the-art algorithms available in the literature.
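A generic spectral conjugate gradient direction has the form $d_{k+1} = -\theta_k g_{k+1} + \beta_k d_k$; the paper's contribution is a specific $(\theta_k, \beta_k)$ pair forcing sufficient descent and closeness to the quasi-Newton direction, whose formulas the abstract omits. The sketch below shows the generic form only, with a Barzilai-Borwein-style $\theta_k$ as an avowedly illustrative placeholder:

```python
import numpy as np

def spectral_cg_direction(g_new, g_old, d_old, s):
    """Generic spectral CG step d = -theta*g + beta*d_old.

    theta below is the Barzilai-Borwein ratio s's / s'y, used purely as an
    illustrative spectral parameter; beta is PRP-type.  The paper's actual
    pair is chosen to guarantee sufficient descent.
    """
    y = g_new - g_old
    theta = (s @ s) / (s @ y)
    beta = g_new @ y / (g_old @ g_old)
    return -theta * g_new + beta * d_old
```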
