Similar Documents
 18 similar documents found (search time: 78 ms)
1.
Global Convergence of a Generalized Quasi-Newton Method for Nonconvex Unconstrained Optimization Problems   (Cited 3 times: 0 self-citations, 3 by others)
陈兰平  焦宝聪 《应用数学》2005,18(4):573-579
This paper proposes a new class of generalized quasi-Newton methods for unconstrained optimization problems and, using a class of inexact line searches, proves the global convergence of the algorithms for the minimization of general nonconvex objective functions.
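As background for the quasi-Newton abstracts in this list: the classical BFGS method is the best-known member of the family that papers such as this one generalize. The sketch below is illustrative only, not the paper's generalized method; the quadratic test problem and the Armijo parameters are arbitrary choices.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H, using
    s = x_{k+1} - x_k and y = grad_{k+1} - grad_k (requires s^T y > 0)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Strongly convex quadratic test problem f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x, H = np.zeros(2), np.eye(2)
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:
        break
    d = -H @ g
    t = 1.0                                   # backtracking Armijo line search
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    s = t * d
    y = grad(x + s) - g
    if s @ y > 1e-12:                         # curvature condition keeps H SPD
        H = bfgs_update(H, s, y)
    x = x + s
```

Skipping the update when s^T y is not safely positive is a standard safeguard that keeps H symmetric positive definite, so every d is a descent direction and the backtracking loop terminates.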

2.
Convergence of a Generalized Quasi-Newton Algorithm for General Objective Functions   (Cited 2 times: 0 self-citations, 2 by others)
This paper proves the global convergence, for general objective functions, of a generalized quasi-Newton algorithm for unconstrained optimization under the Goldstein inexact line search, and establishes, under certain conditions, the local superlinear convergence of the algorithm.

3.
By applying a two-parameter Broyden-like family of update formulas, this work provides a new approach to the open problem of the convergence, for general objective functions, of quasi-Newton-type algorithms for unconstrained optimization.

4.
A smoothing quasi-Newton algorithm is given for the nonlinear complementarity problem with a P_0-function. Under the conditions that the problem has a nonempty bounded solution set and that F' is Lipschitz continuous, the global convergence of the algorithm is proved. The main feature of the global convergence result is that the level set is not assumed to be bounded in advance.

5.
For nonsmooth minimization, C. Lemaréchal in [1] outlined the idea of a higher-order σ-Newton-type algorithm for the unconstrained minimization of convex functions and discussed some of its properties. This paper studies the higher-order σ-Newton-type algorithm of [1] further, proposes a conceptual algorithm, and proves its global convergence.

6.
This paper investigates whether the non-quasi-Newton family of methods for unconstrained optimization is globally convergent when a nonmonotone line search is used, and proves that, under the condition that the objective function is uniformly convex, the non-quasi-Newton family is globally convergent.

7.
Using an improvement function, a nonsmooth convex constrained optimization problem is transformed into an unconstrained one, and an infeasible quasi-Newton bundle algorithm of proximal form is constructed. Notably, the objective function of the unconstrained subproblem may change from one iteration to the next (it is unchanged at null steps and updated at descent steps), so necessary adjustments must be made to guarantee convergence. Adopting the idea of the infeasible bundle method of Sagastizábal and Solodov, and without requiring each iterate to be primal feasible, it is shown that every accumulation point of the sequence generated by the algorithm is an optimal solution of the original problem. Furthermore, for the BFGS quasi-Newton algorithm in the case of a strongly convex objective function, conditions are given that guarantee the boundedness of the quasi-Newton matrices in the global convergence result, together with the R-linear convergence of the iterate sequence.

8.
The solution of a system of nonlinear inequalities is reformulated as a nonlinear least-squares problem. Using an introduced smoothing auxiliary function, new minimization problems are constructed to successively approximate the least-squares problem. Under certain conditions, the global convergence of the proposed smoothing Gauss-Newton algorithm is guaranteed, and under suitable conditions its local quadratic convergence is proved. Numerical experiments at the end of the paper show that the algorithm is effective.
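The abstract does not specify the smoothing auxiliary function. The sketch below uses one common choice, a CHKS-type smoothing of max(0, t), together with plain Gauss-Newton least-squares steps; the toy inequality system and the halving schedule for mu are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def smooth_plus(t, mu):
    """Smooth approximation of max(0, t); tends to max(0, t) as mu -> 0."""
    return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

def smooth_plus_deriv(t, mu):
    """Derivative of smooth_plus with respect to t."""
    return 0.5 * (1.0 + t / np.sqrt(t * t + 4.0 * mu * mu))

def smoothing_gauss_newton(F, J, x, mu=1e-2, iters=30):
    """Approximately solve F(x) <= 0 by Gauss-Newton steps on the smoothed
    least-squares objective 0.5 * sum_i smooth_plus(F_i(x), mu)^2,
    shrinking mu so the smoothed problems approach the true one."""
    for _ in range(iters):
        Fx = F(x)
        r = smooth_plus(Fx, mu)                         # smoothed residuals
        Jr = smooth_plus_deriv(Fx, mu)[:, None] * J(x)  # residual Jacobian (chain rule)
        step, *_ = np.linalg.lstsq(Jr, -r, rcond=None)  # Gauss-Newton step
        x = x + step
        mu *= 0.5                                       # tighten the smoothing
    return x

# Toy inequality system: x1 - 1 <= 0, -x2 <= 0, x1 + x2 - 1.5 <= 0.
F = lambda x: np.array([x[0] - 1.0, -x[1], x[0] + x[1] - 1.5])
J = lambda x: np.array([[1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
x = smoothing_gauss_newton(F, J, np.array([2.0, -1.0]))
```

Because each violated inequality contributes a positive residual, the Gauss-Newton step pushes the iterate toward the feasible region, and the residuals vanish once all constraints hold.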

9.
许任飞 《经济数学》2004,21(3):258-262
This paper studies algorithms for unconstrained optimization problems with singular solutions, an important feature of which is that the Hessian of the objective function may be singular everywhere. We propose a hybrid gradient / regularized Newton-type algorithm for this class of problems and obtain its global convergence under certain conditions. Moreover, after a certain number of iterations the algorithm reduces to the regularized Newton method, and therefore it is locally quadratically convergent.

10.
Convergence of the Non-quasi-Newton Family in the Nonconvex Case   (Cited 11 times: 0 self-citations, 11 by others)
陈兰平  焦宝聪 《计算数学》2000,22(3):369-378
1. Introduction. For unconstrained optimization, quasi-Newton methods are among the most mature and most widely used. Over the past two decades or so, the convergence properties of quasi-Newton methods have been a focal point of theoretical research on nonlinear optimization algorithms. The study of quasi-Newton algorithms with inexact line searches began with Powell [1] in 1976, who proved the global and superlinear convergence of the BFGS algorithm with the Wolfe search. In 1987, Byrd, Nocedal and Ya-Xiang Yuan [3] successfully extended Powell's result to the restricted Broyden convex family. In 1989, Nocedal [4] proved, under the condition that the objective function is uniformly convex, that the BFGS…

11.
Although the Liu–Storey (LS) nonlinear conjugate gradient method has a structure similar to those of the well-known Polak–Ribière–Polyak (PRP) and Hestenes–Stiefel (HS) methods, research on this method is comparatively rare. In this paper, based on the memoryless BFGS quasi-Newton method, we propose a new LS-type method, which converges globally for general functions with the Grippo–Lucidi line search. Moreover, we modify this new LS method so that the modified scheme is globally convergent for nonconvex minimization when the strong Wolfe line search is used. Numerical results are also reported.
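The LS conjugate-gradient parameter itself is standard. Below is a minimal sketch on a convex quadratic, with a plain Armijo backtracking search standing in for the Grippo–Lucidi search and a steepest-descent restart as a safeguard; both are simplifications relative to the paper's method, and the test problem is an arbitrary choice.

```python
import numpy as np

# Convex quadratic test problem f(x) = 0.5 x^T A x - b^T x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(2)
g = grad(x)
d = -g
for _ in range(200):
    if np.linalg.norm(g) < 1e-10:
        break
    # Backtracking Armijo search (a simple stand-in for Grippo-Lucidi).
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
    g_new = grad(x)
    beta = g_new @ (g_new - g) / (-(d @ g))   # Liu-Storey parameter beta_k^LS
    d = -g_new + beta * d
    if g_new @ d >= 0:                        # safeguard: restart with steepest descent
        d = -g_new
    g = g_new
```

The denominator -(d^T g) is positive whenever d is a descent direction, which the restart rule maintains, so the parameter is well defined away from a stationary point.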

12.
1. Introduction. We are concerned with the following variational inequality problem of finding an x* ∈ X such that f(x*)^T (x − x*) ≥ 0 for all x ∈ X, (1.1) where f: R^n → R^n is assumed to be a continuously differentiable function, and X ⊆ R^n is specified by constraints built from functions g_i: R^n → R and h_j: R^n → R, which are twice continuously differentiable. The variational inequality (1.1) is denoted by VI(X, f). An important special case of VI(X, f) is the so-called nonlinear complementarity problem (NCP(f)) with X = R^n_+ = {x ∈ R^n | x ≥ 0}. Variational…

13.
In this paper, we propose a new regularized quasi-Newton method for unconstrained optimization. At each iteration, a regularized quasi-Newton equation is solved to obtain the search direction, and the step size is determined by a nonmonotone Armijo backtracking line search. An adaptive regularization parameter, updated according to the step size of the line search, is employed to compute the next search direction. The presented method is proved to be globally convergent. Numerical experiments show that the proposed method is effective for unconstrained optimization problems and outperforms an existing regularized Newton method.
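The abstract does not give the paper's regularization update rule. The sketch below pairs a generic regularized quasi-Newton equation (B + mu*I) d = -g with a monotone Armijo backtracking search (the paper uses a nonmonotone one) and a hypothetical step-size-driven rule for mu; the exact Hessian of a quadratic test problem stands in for the quasi-Newton matrix B_k.

```python
import numpy as np

def regularized_direction(B, g, mu):
    """Solve the regularized quasi-Newton equation (B + mu*I) d = -g."""
    return np.linalg.solve(B + mu * np.eye(len(g)), -g)

def armijo_backtrack(f, x, d, g, c=1e-4, shrink=0.5):
    """Monotone Armijo backtracking line search."""
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + c * t * (g @ d):
        t *= shrink
    return t

# Quadratic test problem f(x) = 0.5 x^T A x - b^T x.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 0.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x, mu = np.zeros(2), 1.0
for _ in range(50):
    g = grad(x)
    d = regularized_direction(A, g, mu)       # exact Hessian stands in for B_k
    t = armijo_backtrack(f, x, d, g)
    mu = mu * 0.5 if t == 1.0 else mu * 2.0   # adapt regularization to step size
    x = x + t * d
```

Shrinking mu when full steps are accepted lets the direction approach the pure (quasi-)Newton direction, while increasing mu after rejected steps biases the direction toward steepest descent.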

14.
The quasi-Newton method is a well-known and effective method for solving optimization problems. Since it is a line search method, which requires a line search procedure after a search direction is determined at each iteration, a line search rule must be chosen to select the step size along the search direction. In this paper, we propose a new inexact line search rule for the quasi-Newton method and establish several global convergence results for the method under this rule; these results are useful for designing new quasi-Newton methods. Moreover, we analyze the convergence rate of the quasi-Newton method with the new line search rule.

15.
Global Convergence of a Class of Nonmonotone Modified PRP Algorithms   (Cited 1 time: 0 self-citations, 1 by others)
易芳 《经济数学》2006,23(1):99-103
This paper presents a class of modified PRP algorithms with a nonmonotone line search in which the search direction at every iteration is sufficiently descent. Under fairly weak conditions, we prove that this class of nonmonotone modified PRP algorithms is globally convergent.

16.
In this paper, we propose a general smoothing Broyden-like quasi-Newton method for solving a class of nonsmooth equations. Under appropriate conditions, the proposed method converges to a solution of the equation globally and superlinearly. In particular, the proposed method provides the possibility of developing a quasi-Newton method that enjoys superlinear convergence even if strict complementarity fails to hold. We pay particular attention to semismooth equations arising from nonlinear complementarity problems, mixed complementarity problems and variational inequality problems. We show that under certain conditions, the related methods based on the perturbed Fischer–Burmeister function, Chen–Harker–Kanzow–Smale smoothing function and the Gabriel–Moré class of smoothing functions converge globally and superlinearly.
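Of the smoothing functions named above, the perturbed Fischer–Burmeister function has the simplest closed form. The sketch below shows one common parameterization; the exact scaling of the perturbation term varies across the literature, so treat the factor 2*mu^2 as an assumption rather than the paper's definition.

```python
import math

def fischer_burmeister_mu(a, b, mu):
    """Perturbed Fischer-Burmeister function phi_mu(a, b).

    For mu = 0, phi_0(a, b) = 0 iff a >= 0, b >= 0 and a*b = 0, so the NCP
    conditions x >= 0, F(x) >= 0, x^T F(x) = 0 become the equation system
    phi_0(x_i, F_i(x)) = 0.  For mu > 0, phi_mu is smooth everywhere and
    phi_mu(a, b) = 0 iff a > 0, b > 0 and a*b = mu^2.
    """
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)
```

Driving mu down along the iteration therefore traces a smooth path of perturbed complementarity conditions that tightens to exact complementarity in the limit.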

17.
Limited-memory quasi-Newton methods and trust-region methods represent two efficient approaches to solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems; for this reason, limited-memory methods are usually combined with a line search. We show how to efficiently combine limited-memory and trust-region techniques. One of our approaches is based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix. The decomposition allows for finding a nearly-exact solution to the trust-region subproblem defined by the Euclidean norm with an insignificant computational overhead compared with the cost of computing the quasi-Newton direction in line-search limited-memory methods. The other approach is based on two new eigenvalue-based norms. The advantage of the new norms is that the trust-region subproblem is separable and each of the smaller subproblems is easy to solve. We show that our eigenvalue-based limited-memory trust-region methods are globally convergent. Moreover, we propose improved versions of the existing limited-memory trust-region algorithms. The presented results of numerical experiments demonstrate the efficiency of our approach, which is competitive with line-search versions of the L-BFGS method.

18.
Notes on the Dai-Yuan-Yuan modified spectral gradient method   (Cited 1 time: 0 self-citations, 1 by others)
In this paper, we give some notes on the two modified spectral gradient methods developed in [10]. These notes present the relationship between their stepsize formulae and some new secant equations in the quasi-Newton method. In particular, we also introduce another two new choices of stepsize. Using an efficient nonmonotone line search technique, we propose some new spectral gradient methods. Under some mild conditions, we show that the proposed methods are globally convergent. Numerical experiments on a large number of test problems from the CUTEr library are also reported, which show the efficiency of the proposed methods.
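The modified stepsize formulae of [10] are not reproduced in the abstract. The classical Barzilai–Borwein stepsizes from which spectral gradient methods start can be sketched as follows; the quadratic test problem and the choice of the first (fixed) step are illustrative assumptions.

```python
import numpy as np

def bb_stepsizes(s, y):
    """Classical Barzilai-Borwein stepsizes, with s = x_{k} - x_{k-1} and
    y = g_k - g_{k-1}.  Modified spectral formulae refine these using
    additional secant-equation information."""
    return (s @ s) / (s @ y), (s @ y) / (y @ y)

# Spectral gradient descent on f(x) = 0.5 x^T A x - b^T x using alpha_BB1.
A = np.array([[5.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
g = grad(x)
x_new = x - 0.1 * g                     # initial plain gradient step
for _ in range(60):
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-12:
        break
    s, y = x_new - x, g_new - g
    alpha1, _ = bb_stepsizes(s, y)
    x, g = x_new, g_new
    x_new = x - alpha1 * g              # spectral step; f may rise between iterates
```

Because the spectral step can increase f on individual iterations, spectral methods are naturally paired with nonmonotone line searches such as the one used in the paper.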

