Similar Literature
19 similar records found (search time: 78 ms)
1.
Solving large-scale signal recovery problems by a BFGS-update quasi-Newton method
陈凤华  李双安 《数学杂志》2015,35(3):727-734
This paper applies a BFGS-update quasi-Newton method to the large-scale signal recovery problem min{‖u‖₁ : Au = b}, which is commonly recast as an ℓ₁-regularized least-squares problem. Using Nesterov's smoothing technique on ‖u‖₁, the original problem is transformed into an unconstrained smooth convex program. Good numerical results are obtained, showing that the BFGS-update quasi-Newton method is a feasible approach to large-scale signal recovery.
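A minimal sketch of the idea, not the authors' code: smooth ‖u‖₁ with a Huber-type function (one standard realization of Nesterov smoothing) and hand the resulting smooth convex objective to a BFGS routine. The problem sizes, λ and μ below are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def huber(u, mu):
        # Nesterov-type smoothing of |u|: quadratic near zero, linear outside.
        a = np.abs(u)
        return np.where(a <= mu, u * u / (2 * mu), a - mu / 2)

    def f(u):
        r = A @ u - b
        return 0.5 * r @ r + lam * huber(u, mu).sum()

    def g(u):
        # Gradient of the smoothed objective; clip is the derivative of huber.
        return A.T @ (A @ u - b) + lam * np.clip(u / mu, -1.0, 1.0)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 200))
    u_true = np.zeros(200); u_true[:5] = 1.0
    b = A @ u_true
    lam, mu = 0.1, 1e-3
    u_hat = minimize(f, np.zeros(200), jac=g, method="BFGS").x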

2.
A hybrid iterative method combining Newton and second-order quasi-Newton iterations
Here F: R^n → R^n is a sufficiently smooth vector function. The principal iterative methods for solving (1.1) are Newton's method and quasi-Newton methods; two-step or multi-step hybrid iterations for (1.1) had not previously appeared. In 1984 Pan proposed a second-order quasi-Newton method. Testing it on the nonlinear-equation test functions of Moré, Garbow and Hillstrom (1981), we found it to be very sensitive to the initial point and the initial approximate Jacobian. If Newton's method is alternated with the second-order quasi-Newton method, the iterative process …
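A sketch of the alternating idea only: even iterations refresh the Jacobian (a Newton step), odd iterations reuse and update it. Since the excerpt does not give Pan's second-order formula, a Broyden rank-one update stands in for the quasi-Newton step here; this is an assumption for illustration.

    import numpy as np

    def fd_jacobian(F, x, h=1e-7):
        # Forward-difference Jacobian approximation.
        n = x.size
        J = np.empty((n, n))
        Fx = F(x)
        for j in range(n):
            e = np.zeros(n); e[j] = h
            J[:, j] = (F(x + e) - Fx) / h
        return J

    def hybrid_solve(F, x, iters=50, tol=1e-10):
        B = fd_jacobian(F, x)
        for k in range(iters):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            if k % 2 == 0:
                B = fd_jacobian(F, x)      # "Newton" step: fresh Jacobian
            s = np.linalg.solve(B, -Fx)
            x = x + s
            y = F(x) - Fx
            B = B + np.outer(y - B @ s, s) / (s @ s)   # quasi-Newton update
        return x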

3.
黄翔 《运筹学学报》2005,9(4):74-80
In recent years, inverse problems of determining the coefficients of elliptic equations have found wide application in geomagnetics, geophysics, metallurgy, biology, and other practical fields. This paper discusses numerical methods for the two-dimensional coefficient inverse problem of elliptic equations. By the least-squares error principle, the inverse problem is recast as a variational problem and further discretized into an optimization problem whose objective function depends on the coefficients to be determined. We focus on the performance of nonlinear conjugate gradient methods on this optimization problem, with quasi-Newton methods as a comparison. To improve efficiency, a suitable preconditioner is chosen to accelerate convergence, and the influence of different line search strategies on the optimization algorithms is also examined. Numerical experiments show that, for this class of large-scale optimization problems, the nonlinear conjugate gradient method is more effective than the quasi-Newton method.
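A minimal sketch of preconditioned nonlinear CG of the kind compared here, assuming a Polak-Ribière(+) update and an Armijo backtracking line search; the paper's actual preconditioner and line searches are not reproduced.

    import numpy as np

    def ncg(f, grad, x, M_inv=None, iters=200, tol=1e-8):
        # Preconditioned Polak-Ribiere+ nonlinear conjugate gradient.
        M_inv = (lambda v: v) if M_inv is None else M_inv
        g = grad(x); d = -M_inv(g)
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            t, fx = 1.0, f(x)
            while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * (g @ d):
                t *= 0.5                    # Armijo backtracking
            x_new = x + t * d
            g_new = grad(x_new)
            beta = max(0.0, g_new @ M_inv(g_new - g) / (g @ M_inv(g)))
            d = -M_inv(g_new) + beta * d
            x, g = x_new, g_new
        return x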

4.
高岩 《运筹学学报》2011,15(2):53-58
This paper studies nonsmooth nonlinear complementarity problems. The nonsmooth complementarity problem is first reformulated as a system of nonsmooth equations, which is then solved by a Newton-type method. At each iteration, this Newton method requires only a single element of the B-differential of the underlying function. Superlinear convergence of the method is proved.
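A sketch of the generalized-Newton pattern, assuming the common min-based reformulation H(x) = min(x, F(x)) = 0 (the paper's specific reformulation is not given in the excerpt). Each step picks one element of the B-differential: the identity row where x_i attains the min, a Jacobian row where F_i does.

    import numpy as np

    def semismooth_newton(F, JF, x, iters=50, tol=1e-10):
        for _ in range(iters):
            Fx = F(x)
            H = np.minimum(x, Fx)
            if np.linalg.norm(H) < tol:
                break
            J = JF(x)
            V = np.eye(x.size)          # one element of the B-differential
            rows = Fx < x               # where min picks F_i, use row i of F'(x)
            V[rows] = J[rows]
            x = x + np.linalg.solve(V, -H)
        return x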

5.
To remedy the poor local search ability and slow late-stage convergence of particle swarm optimization, an improved particle swarm algorithm is proposed that switches to a quasi-Newton method in its later stage, combining the global search of particle swarm optimization with the fine local search of quasi-Newton methods. The transcendental equation is recast as a function optimization problem and solved with this algorithm; numerical experiments show high convergence speed and solution accuracy.
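A minimal sketch of the two-phase idea, with illustrative swarm parameters: a plain PSO explores globally, then a BFGS polish refines the best particle. For instance, f = lambda z: (np.cos(z[0]) - z[0])**2 recasts the transcendental equation cos x = x as a minimization.

    import numpy as np
    from scipy.optimize import minimize

    def pso_then_bfgs(f, dim, n=30, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
        p = x.copy(); pf = np.apply_along_axis(f, 1, x)
        g = p[pf.argmin()].copy()
        for _ in range(iters):                        # global exploration
            r1, r2 = rng.random((2, n, dim))
            v = 0.7 * v + 1.5 * r1 * (p - x) + 1.5 * r2 * (g - x)
            x += v
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pf
            p[better], pf[better] = x[better], fx[better]
            g = p[pf.argmin()].copy()
        # late stage: quasi-Newton (BFGS) local refinement
        return minimize(f, g, method="BFGS").x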

6.
A diagonal sparse quasi-Newton method for unconstrained optimization
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization. The algorithm uses an inexact Armijo line search and, at each iteration, approximates the quasi-Newton update matrix by a diagonal matrix, markedly reducing both the storage and the work needed to compute the search direction, and offering a new approach to large-scale unconstrained optimization. Under the usual assumptions, global convergence and a linear convergence rate are proved, and superlinear convergence properties are analyzed. Numerical experiments show that the algorithm is more effective than the conjugate gradient method and is suitable for large-scale unconstrained problems.
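A sketch of the storage pattern only: the safeguarded componentwise secant ratio below is one simple way to realize a diagonal update, assumed here for illustration; the paper's actual correction formula is not reproduced.

    import numpy as np

    def diag_qn(f, grad, x, iters=500, tol=1e-8):
        d = np.ones_like(x)             # diagonal Hessian approximation: O(n) storage
        g = grad(x)
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            p = -g / d                  # search direction in O(n) work
            t, fx = 1.0, f(x)
            while t > 1e-12 and f(x + t * p) > fx + 1e-4 * t * (g @ p):
                t *= 0.5                # Armijo inexact line search
            s = t * p
            x = x + s
            g_new = grad(x)
            y = g_new - g
            mask = s != 0
            d[mask] = np.clip(y[mask] / s[mask], 1e-8, 1e8)  # safeguarded secant fit
            g = g_new
        return x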

7.
Newton's method is a classical method for solving nonlinear equations and systems of equations. This paper refines the classical Newton method in Banach spaces and studies the convergence of the refined method, which has broader prospects for application.

8.
1. Introduction. Consider the unconstrained optimization problem min_{x∈R^n} f(x), (1.1) where f is a nonlinear differentiable function. For small and medium-scale unconstrained problems, quasi-Newton methods (such as the BFGS method) are very effective. For large-scale problems, i.e. when n is large, the storage an algorithm requires becomes critical, and the linear-algebra cost per iteration also affects its efficiency. Limited-memory quasi-Newton methods can be viewed as a generalization of conjugate gradient methods. Methods of this type were first proposed by Perry and Shanno and have since been studied by many authors, including Gill and Murray, Buckley, Buckley and LeNir, and Nocedal. The limited-memory BFGS method, due to Nocedal, is currently a very effective limited-memory quasi-Newton method; its basic aim is to reduce storage. Since the BFGS update formula can be written as …
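Where the excerpt breaks off, the standard consequence of rewriting the BFGS update is the two-loop recursion, sketched below: the search direction -H_k g is computed from the m most recent (s, y) pairs without ever forming an n-by-n matrix.

    import numpy as np

    def lbfgs_direction(g, s_list, y_list):
        # Two-loop recursion for the limited-memory BFGS direction.
        q = g.copy(); alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((rho, a))
        if s_list:                                  # standard initial scaling
            s, y = s_list[-1], y_list[-1]
            q *= (s @ y) / (y @ y)
        for (rho, a), s, y in zip(reversed(alphas), s_list, y_list):
            b = rho * (y @ q)
            q += (a - b) * s
        return -q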

9.
Newton's method is a classical method for solving the nonlinear equation F(x) = 0. Under the usual assumptions, Newton's method is only locally convergent. This paper proves global convergence of Newton's method for one-dimensional convex functions and gives an application to the integral level-set method for global optimization.
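The one-dimensional iteration in question is short enough to state in full; this is the textbook recurrence, not the paper's proof, and the convergence remark in the comment is an illustration of the claim rather than its precise hypotheses.

    def newton_1d(f, df, x, iters=100, tol=1e-12):
        # x_{k+1} = x_k - f(x_k)/f'(x_k); for a convex differentiable f with a
        # root, global convergence holds under the paper's assumptions.
        for _ in range(iters):
            fx = f(x)
            if abs(fx) < tol:
                break
            x -= fx / df(x)
        return x

    root = newton_1d(lambda x: x**2 - 2, lambda x: 2 * x, 2.0)  # sqrt(2)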

10.
This paper studies a smoothing Newton method for weighted linear complementarity problems. Using a class of smoothing functions, the weighted linear complementarity problem is equivalently reformulated as a system of smooth equations, and a new smoothing Newton method is proposed to solve it. Under suitable conditions, the algorithm is shown to be globally and locally quadratically convergent. Unlike existing smoothing Newton methods, our algorithm generates the step size with a nonmonotone derivative-free line search, which yields better convergence properties and practical performance.
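A sketch of the smoothing-Newton pattern for the weighted LCP s = Mx + q, x,s >= 0, x∘s = w. The CHKS-type smoothing below is one common choice assumed for illustration; the paper's specific smoothing function and its nonmonotone derivative-free line search are not reproduced.

    import numpy as np

    def smoothing_newton_wlcp(M, q, w, mu=1.0, iters=50, tol=1e-10):
        # phi_mu(a, b) = a + b - sqrt((a-b)^2 + 4w + 4mu^2) vanishes with
        # mu = 0 exactly when a, b >= 0 and a*b = w.
        x = np.ones(q.size)
        for _ in range(iters):
            s = M @ x + q
            r = np.sqrt((x - s) ** 2 + 4 * w + 4 * mu ** 2)
            H = x + s - r
            if np.linalg.norm(H) < tol and mu < 1e-12:
                break
            da = 1 - (x - s) / r              # d phi / da
            db = 1 + (x - s) / r              # d phi / db
            J = np.diag(da) + db[:, None] * M # chain rule through s = Mx + q
            x = x + np.linalg.solve(J, -H)
            mu *= 0.2                         # drive the smoothing to zero
        return x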

11.
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell–Hestenes–Rockafellar scheme. A combined algorithm, that uses the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the Cute collection are presented.
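For orientation, a minimal sketch of the Powell-Hestenes-Rockafellar scheme the paper builds on, for inequality constraints g(x) <= 0. The unconstrained BFGS inner solve below is a stand-in for the box-constrained quasi-Newton/truncated-Newton machinery the paper actually proposes.

    import numpy as np
    from scipy.optimize import minimize

    def phr_augmented_lagrangian(f, g, x, lam, rho=10.0, outer=20):
        for _ in range(outer):
            def L(z):
                # PHR penalty-Lagrangian term for g(z) <= 0.
                t = np.maximum(0.0, lam + rho * g(z))
                return f(z) + (t @ t - lam @ lam) / (2 * rho)
            x = minimize(L, x, method="BFGS").x     # inner minimization
            lam = np.maximum(0.0, lam + rho * g(x)) # multiplier update
        return x, lam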

12.
A modified version of the truncated-Newton algorithm of Nash ([24], [25], [29]) is presented, differing from it only in the use of an exact Hessian-vector product for carrying out the large-scale unconstrained optimization required in variational data assimilation. The exact Hessian-vector product is obtained by solving an optimal control problem of distributed parameters (i.e. the system under study occupies a certain spatial and temporal domain and is modeled by partial differential equations). The algorithm is referred to as the adjoint truncated-Newton algorithm. It is based on first- and second-order adjoint techniques, allowing one to obtain a better approximation to the Newton line-search direction for the problem tested here. The adjoint truncated-Newton algorithm is applied here to a limited-area shallow-water equations model with model-generated data, where the initial conditions serve as control variables. We compare the performance of the adjoint truncated-Newton algorithm with that of the original truncated-Newton method [29] and the LBFGS (limited-memory BFGS) method of Liu and Nocedal [23]. Our numerical tests yield results twice as fast as those obtained by the truncated-Newton algorithm, and faster than the LBFGS method both in number of iterations and in CPU time.
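The core of any truncated-Newton method is an inner conjugate-gradient solve of H d = -g driven purely by Hessian-vector products; a generic sketch follows, where the callable hvp(x, v) plays the role of the paper's second-order-adjoint product (how hvp is computed is exactly what the paper changes).

    import numpy as np

    def truncated_newton_step(grad, hvp, x, cg_iters=20, tol=1e-8):
        # Approximately solve H d = -g by CG using only Hessian-vector products.
        g = grad(x)
        d = np.zeros_like(x); r = -g.copy(); p = r.copy()
        rr = r @ r
        for _ in range(cg_iters):
            Hp = hvp(x, p)
            if p @ Hp <= 0:              # negative curvature: truncate
                break
            a = rr / (p @ Hp)
            d += a * p; r -= a * Hp
            rr_new = r @ r
            if np.sqrt(rr_new) < tol:
                break
            p = r + (rr_new / rr) * p
            rr = rr_new
        return d if d.any() else -g      # fall back to steepest descent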

13.
This paper proposes a combined trust-region and quasi-Newton algorithm for equality-constrained problems. Its distinguishing feature is that when the approximate Hessian of the Lagrangian is positive definite on the null space of the Jacobian of the equality constraints, a quasi-Newton step is taken; otherwise a trust-region step is used. Under the usual convergence assumptions for trust-region methods, global convergence of the combined algorithm is proved.
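The switching test can be made concrete with a Cholesky attempt on the reduced Hessian; a sketch under the assumption that B approximates the Lagrangian Hessian and A is the constraint Jacobian (the paper's actual step computations are not reproduced).

    import numpy as np
    from scipy.linalg import null_space

    def choose_step_type(B, A):
        # Quasi-Newton step if Z^T B Z is positive definite on the null
        # space of A; otherwise fall back to a trust-region step.
        Z = null_space(A)
        try:
            np.linalg.cholesky(Z.T @ B @ Z)
            return "quasi-newton"
        except np.linalg.LinAlgError:
            return "trust-region"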

14.
The quasi-Newton method is a well-known and effective method for solving optimization problems. Since it is a line search method, which needs a line search procedure after determining a search direction at each iteration, a rule must be chosen for the step size along the search direction. In this paper, we propose a new inexact line search rule for quasi-Newton methods and establish global convergence results for the resulting method. These results are useful in designing new quasi-Newton methods. Moreover, we analyze the convergence rate of the quasi-Newton method under the new line search rule.
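As a point of reference only, the standard weak Wolfe test below is the kind of inexact acceptance rule such papers generalize; the paper's own rule is not reproduced here.

    def wolfe_ok(f, grad, x, p, t, c1=1e-4, c2=0.9):
        # Weak Wolfe conditions: sufficient decrease plus a curvature bound.
        g0 = grad(x) @ p
        armijo = f(x + t * p) <= f(x) + c1 * t * g0
        curvature = grad(x + t * p) @ p >= c2 * g0
        return armijo and curvature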

15.
Two approaches to quasi-Newton methods for constrained optimization problems in R^n are presented. These approaches are based on a class of Lagrange multiplier approximation formulas used by the author in his previous work on Newton's method for constrained problems. The first approach is set in the framework of a diagonalized multiplier method. From this point of view, a new update rule for the Lagrange multipliers which depends on the particular quasi-Newton method employed is given. This update rule, in contrast to most other update rules, does not require exact minimization of the intermediate unconstrained problem. In fact, the optimal convergence rate is attained in the extreme case when only one step of a quasi-Newton method is taken on this intermediate problem. The second approach transforms the constrained optimization problem into an unconstrained problem of the same dimension. The author would like to thank J. Moré and M. J. D. Powell for comments related to the material in Section 13. He also thanks J. Nocedal for the computer results in Tables 1–3 and M. Wright for the results in Table 4, which were obtained via one of her general programs. Discussions with M. R. Hestenes and A. Miele regarding their contributions to this area were very helpful. Many individuals, including J. E. Dennis, made useful general comments at various stages of this paper. Finally, the author is particularly thankful to R. Byrd, M. Heath, and R. McCord for reading the paper in detail and suggesting many improvements. This work was supported by the Energy Research and Development Administration, Contract No. E-(40-1)-5046, and was performed in part while the author was visiting the Department of Operations Research, Stanford University, Stanford, California.
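A sketch of the "one quasi-Newton step, then update the multipliers" pattern for equality constraints c(x) = 0, with B_inv an inverse-Hessian approximation maintained elsewhere. The multiplier formula shown is the basic augmented-Lagrangian update, assumed for illustration; the paper's method-dependent update rule is not reproduced.

    import numpy as np

    def diagonalized_multiplier_step(grad_f, c, jac_c, x, lam, B_inv, rho):
        # Gradient of the augmented Lagrangian at (x, lam).
        g = grad_f(x) + jac_c(x).T @ (lam + rho * c(x))
        x_new = x - B_inv @ g              # a single quasi-Newton step
        lam_new = lam + rho * c(x_new)     # multiplier update, no inner solve
        return x_new, lam_new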

16.
On the limited memory BFGS method for large scale optimization
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems. This work was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contract DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
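The "simple scaling" referred to is commonly taken to be the initial-matrix factor gamma_k = s'y / y'y (as in the two-loop sketch under entry 8). In practice the method is available off the shelf; a usage example with SciPy, where maxcor is m, the number of stored (s, y) pairs:

    import numpy as np
    from scipy.optimize import minimize

    def rosen(x):
        # Extended Rosenbrock test function.
        return ((1 - x[:-1]) ** 2 + 100 * (x[1:] - x[:-1] ** 2) ** 2).sum()

    res = minimize(rosen, np.zeros(100), method="L-BFGS-B",
                   options={"maxcor": 10})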

17.
We show that strong differentiability at solutions is not necessary for superlinear convergence of quasi-Newton methods for solving nonsmooth equations. We improve the superlinear convergence result of Ip and Kyparisis for general quasi-Newton methods as well as the Broyden method. For a special example, the Newton method is divergent but the Broyden method is superlinearly convergent.
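For reference, Broyden's "good" method needs no derivatives of F, which is why it remains applicable to the nonsmooth equations considered here; a minimal sketch (the paper's special example is not reproduced):

    import numpy as np

    def broyden(F, x, iters=100, tol=1e-10):
        B = np.eye(x.size)               # Jacobian approximation
        Fx = F(x)
        for _ in range(iters):
            if np.linalg.norm(Fx) < tol:
                break
            s = np.linalg.solve(B, -Fx)
            x = x + s
            F_new = F(x)
            y = F_new - Fx
            B += np.outer(y - B @ s, s) / (s @ s)  # rank-one secant update
            Fx = F_new
        return x

    # e.g. a piecewise-linear equation: max(x, 2x) = 1
    x = broyden(lambda z: np.maximum(z, 2 * z) - 1, np.array([2.0]))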

18.
In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization with simple bounds on the variables. The limited memory quasi-Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Newton direction and two subspace directions built from the gradient and a modified gradient. Our algorithm can be applied to large-scale problems as there is no need to solve any subproblems. The global convergence of the method is proved and some numerical results are also given.
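A sketch of the active/free split for bounds lo <= x <= hi, assuming qn_dir is a limited-memory quasi-Newton direction on the free variables (e.g. from a two-loop recursion); the paper's modified-gradient component is not reproduced.

    import numpy as np

    def project(x, lo, hi):
        return np.minimum(np.maximum(x, lo), hi)

    def active_set_split(x, g, lo, hi, eps=1e-8):
        # Variables pressed against a bound with the gradient pointing
        # outward are active; the rest form the quasi-Newton subspace.
        at_lo = (x <= lo + eps) & (g > 0)
        at_hi = (x >= hi - eps) & (g < 0)
        active = at_lo | at_hi
        return active, ~active

    def step(x, g, qn_dir, lo, hi, t=1.0):
        active, free = active_set_split(x, g, lo, hi)
        d = np.where(free, qn_dir, -g)    # projected gradient on active part
        return project(x + t * d, lo, hi)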



19.
顾桂定  王德人 《计算数学》1999,21(4):417-428
1. Introduction. Practical problems frequently require minimizing a family of functions, i.e. min f_i(x), i = 1, …, p, (1.1) where the f_i: R^n → R share a common Hessian matrix G(x) ≡ ∇²f_i(x) and p is of moderate size. Problem (1.1) arises, for example, in the study of elastic bodies under various loads, where f_i(x) = f(x) + q_i^T x + c_i, i = 1, …, p. For different i the f_i(x) have different minimizers and different gradients ∇f_i(x), but the same Hessian matrix G(x). In 1994, O'Leary et al. extended quasi-Newton algorithms to a multiple version …
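A sketch of the shared-curvature idea only, assumed for illustration: because every f_i has the same Hessian G(x), one inverse-Hessian approximation H can serve the whole family, with curvature pairs from any member refining it for the others. This is not O'Leary et al.'s actual algorithm, whose details are cut off in the excerpt.

    import numpy as np

    def bfgs_update(H, s, y):
        # Standard inverse BFGS update.
        rho = 1.0 / (y @ s)
        I = np.eye(s.size)
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)

    def minimize_family(grads, x0s, iters=100):
        # grads[i] is the gradient of f_i; one H is shared by all p problems.
        H = np.eye(x0s[0].size)
        xs = [x.copy() for x in x0s]
        for _ in range(iters):
            for i, g in enumerate(grads):
                gi = g(xs[i])
                s = -H @ gi                 # quasi-Newton step (unit length)
                x_new = xs[i] + s
                y = g(x_new) - gi
                if y @ s > 1e-12:           # keep H positive definite
                    H = bfgs_update(H, s, y)
                xs[i] = x_new
        return xs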
