18 similar documents found; search took 78 ms
1.
Global convergence of a generalized quasi-Newton method for nonconvex unconstrained optimization   Cited: 3 (self-citations: 0, others: 3)
This paper proposes a new class of generalized quasi-Newton methods for unconstrained optimization and, using a class of inexact line searches, proves the global convergence of the algorithm for minimizing general nonconvex objective functions.
2.
Convergence of a generalized quasi-Newton algorithm for general objective functions   Cited: 2 (self-citations: 0, others: 2)
This paper proves the global convergence, for general objective functions, of a generalized quasi-Newton algorithm for unconstrained optimization under the Goldstein inexact line search, and under certain conditions establishes the local superlinear convergence of the algorithm.
3.
4.
Niu Xiaomeng, Mathematics in Practice and Theory, 2016, (6): 240-247
A smoothing quasi-Newton algorithm is given for the nonlinear complementarity problem with a P0 function. Under the conditions that the P0 nonlinear complementarity problem has a nonempty bounded solution set and that F' is Lipschitz continuous, the global convergence of the algorithm is proved. A key feature of the global convergence result is that the level set need not be assumed bounded in advance.
5.
For nonsmooth minimization problems, C. Lemaréchal in [1] suggested the idea of a higher-order σ-Newton-type algorithm for the unconstrained minimization of convex functions and discussed some of its properties. This paper studies the higher-order σ-Newton-type algorithm of [1] further, proposes a conceptual algorithm, and proves its global convergence.
6.
7.
Advances in Mathematics (China), 2016, (2)
Using an improvement function, the nonsmooth convex constrained optimization problem is transformed into an unconstrained one, and an infeasible proximal-type quasi-Newton bundle algorithm is constructed. Notably, the objective function of the unconstrained subproblem may change as the iterations proceed (it is unchanged at a null step and updated at a descent step), so necessary adjustments must be made to guarantee convergence. Following the infeasible bundle method of Sagastizábal and Solodov, and without requiring each iterate to be primal feasible, it is shown that every accumulation point of the sequence generated by the algorithm is an optimal solution of the original problem. Furthermore, for the BFGS quasi-Newton algorithm with a strongly convex objective, the paper obtains conditions guaranteeing that the quasi-Newton matrices remain bounded in the global convergence result, as well as R-linear convergence of the iterate sequence.
8.
9.
This paper studies algorithms for unconstrained optimization problems with singular solutions. An important feature of this class of problems is that the Hessian of the objective function may be singular everywhere. We propose a hybrid gradient / regularized Newton-type algorithm for such problems and establish its global convergence under certain conditions. Moreover, after a certain number of iterations the algorithm reduces to the regularized Newton method and is therefore locally quadratically convergent.
10.
Convergence of a non-quasi-Newton nonconvex family   Cited: 11 (self-citations: 0, others: 11)
1. Introduction. For unconstrained optimization, quasi-Newton methods are among the most mature and most widely used solvers. Over the past two decades, the convergence properties of quasi-Newton methods have been a focus of research in the theory of nonlinear optimization algorithms. The study of quasi-Newton algorithms with inexact line searches began with Powell [1] in 1976, who proved the global and superlinear convergence of the BFGS algorithm with the Wolfe search. In 1978, Byrd, Nocedal, and Ya-Xiang Yuan [3] successfully extended Powell's result to the restricted Broyden convex family. In 1989, Nocedal [4] proved, under uniform convexity of the objective function, the convergence of the BFG… (truncated)
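The BFGS method discussed in this entry updates an inverse-Hessian approximation from successive gradient differences. A minimal sketch of the standard update (the generic textbook form, not any specific variant from these papers):

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k.  Requires s^T y > 0,
    which a Wolfe line search guarantees for smooth objectives."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Toy check on a quadratic f(x) = 0.5 x^T A x: the updated matrix
# satisfies the secant equation H_new @ y == s exactly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x0, x1 = np.array([1.0, 1.0]), np.array([0.5, 0.2])
s, y = x1 - x0, A @ x1 - A @ x0
H = bfgs_update(np.eye(2), s, y)
print(np.allclose(H @ y, s))  # True: secant condition holds
```

The secant property H_new y = s is exactly the condition whose generalizations (the "generalized" and "non-quasi-Newton" families above) relax or replace.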
11.
Although the Liu–Storey (LS) nonlinear conjugate gradient method has a structure similar to that of the well-known Polak–Ribière–Polyak (PRP) and Hestenes–Stiefel (HS) methods, research on this method is very rare. In this paper, based on the memoryless BFGS quasi-Newton method, we propose a new LS type method, which converges globally for general functions with the Grippo–Lucidi line search. Moreover, we modify this new LS method such that the modified scheme is globally convergent for nonconvex minimization if the strong Wolfe line search is used. Numerical results are also reported.
12.
1. Introduction. We are concerned with the following variational inequality problem: find an x* ∈ X such that (y − x*)^T f(x*) ≥ 0 for all y ∈ X, (1.1), where f: R^n → R^n is assumed to be a continuously differentiable function, and X ⊆ R^n is specified by constraint functions g_i: R^n → R and h_j: R^n → R, which are twice continuously differentiable. The variational inequality (1.1) is denoted by VI(X, f). An important special case of VI(X, f) is the so-called nonlinear complementarity problem (NCP(f)) with X = R^n_+ = {x ∈ R^n | x ≥ 0}. Variational… (truncated)
13.
In this paper, we propose a new regularized quasi-Newton method for unconstrained optimization. At each iteration, a regularized quasi-Newton equation is solved to obtain the search direction. The step size is determined by a nonmonotone Armijo backtracking line search. An adaptive regularization parameter, updated according to the step size of the line search, is employed to compute the next search direction. The presented method is proved to be globally convergent. Numerical experiments show that the proposed method is effective for unconstrained optimization and outperforms the existing regularized Newton method.
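The nonmonotone Armijo backtracking mentioned here accepts a step when the trial value improves on the maximum of several recent function values rather than only the current one. A minimal sketch (generic form; the paper's specific reference-value rule and regularization are not reproduced):

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, beta=0.5, sigma=1e-4, t=1.0):
    """Nonmonotone Armijo backtracking: accept step t when
    f(x + t d) <= max(recent f values) + sigma * t * g^T d.
    d must be a descent direction (g @ d < 0)."""
    f_ref = max(f_hist)          # reference value over recent iterates
    gd = g @ d
    while f(x + t * d) > f_ref + sigma * t * gd:
        t *= beta                # backtrack: shrink the step
    return t

# One steepest-descent step on a simple quadratic.
f = lambda x: float(x @ x)
x = np.array([2.0, -1.0])
g = 2 * x
d = -g                            # descent direction
t = nonmonotone_armijo(f, x, d, g, f_hist=[f(x)])
print(t > 0 and f(x + t * d) < f(x))  # True
```

With a history of length one this reduces to the classical (monotone) Armijo rule; the nonmonotone variant simply widens `f_hist` to the last few iterates.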
14.
Zhen-Jun Shi, Journal of Mathematical Analysis and Applications, 2006, 315(1): 120-131
The quasi-Newton method is a well-known, effective method for solving optimization problems. Since it is a line search method, which requires a line search procedure after determining a search direction at each iteration, a line search rule must be chosen to select the step size along that direction. In this paper, we propose a new inexact line search rule for the quasi-Newton method and establish some global convergence results for the method under this rule. These results are useful in designing new quasi-Newton methods. Moreover, we analyze the convergence rate of the quasi-Newton method with the new line search rule.
15.
Global convergence of a class of nonmonotone modified PRP algorithms   Cited: 1 (self-citations: 0, others: 1)
This paper presents a modified PRP algorithm under a nonmonotone line search; the method guarantees that the search direction at each iteration is sufficiently descent. Under fairly weak conditions, we prove that this class of nonmonotone modified PRP algorithms is globally convergent.
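For reference, the classical (unmodified) PRP search direction that this entry's algorithm modifies can be sketched as follows; the specific modification ensuring sufficient descent is not reproduced here:

```python
import numpy as np

def prp_direction(g_new, g_old, d_old):
    """Classical Polak-Ribiere-Polyak conjugate gradient direction:
    d = -g_new + beta_PRP * d_old, with
    beta_PRP = g_new^T (g_new - g_old) / ||g_old||^2."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    return -g_new + beta * d_old

# First iteration uses plain steepest descent; later ones mix in d_old.
g0 = np.array([1.0, -2.0])
d0 = -g0
g1 = np.array([0.5, 0.3])
d1 = prp_direction(g1, g0, d0)
print(d1 @ g1 < 0)  # True for this example: d1 is a descent direction
```

The classical PRP direction is not guaranteed to be a descent direction for general functions, which is precisely what modified PRP schemes like the one above are designed to fix.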
16.
In this paper, we propose a general smoothing Broyden-like quasi-Newton method for solving a class of nonsmooth equations. Under appropriate conditions, the proposed method converges to a solution of the equation globally and superlinearly. In particular, the proposed method provides the possibility of developing a quasi-Newton method that enjoys superlinear convergence even if strict complementarity fails to hold. We pay particular attention to semismooth equations arising from nonlinear complementarity problems, mixed complementarity problems and variational inequality problems. We show that under certain conditions, the related methods based on the perturbed Fischer–Burmeister function, Chen–Harker–Kanzow–Smale smoothing function and the Gabriel–Moré class of smoothing functions converge globally and superlinearly.
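The perturbed Fischer–Burmeister function mentioned in this entry replaces the nonsmooth complementarity condition with a smooth equation. A sketch of one common smoothed form (conventions for the perturbation term vary across papers; this uses 2*mu^2 under the square root):

```python
import math

def fb_smooth(a, b, mu):
    """One common perturbed Fischer-Burmeister function:
    phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
    As mu -> 0 it recovers phi(a, b) = a + b - sqrt(a^2 + b^2),
    whose zeros characterize a >= 0, b >= 0, a * b = 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

# A complementary pair (a = 0, b = 3): the unsmoothed function
# vanishes, and the smoothed value approaches it as mu shrinks.
print(abs(fb_smooth(0.0, 3.0, 0.0)) < 1e-12)   # True
print(abs(fb_smooth(0.0, 3.0, 1e-6)) < 1e-6)   # True
```

For mu > 0 the function is differentiable everywhere, which is what lets a smoothing quasi-Newton iteration be applied to the reformulated equation.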
17.
Oleg Burdakov, Lujin Gong, Spartak Zikrin, Ya-xiang Yuan, Mathematical Programming Computation, 2017, 9(1): 101-134
Limited-memory quasi-Newton methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, the limited-memory methods are usually combined with a line search. We show how to efficiently combine limited-memory and trust-region techniques. One of our approaches is based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix. The decomposition allows for finding a nearly exact solution to the trust-region subproblem defined by the Euclidean norm with an insignificant computational overhead compared with the cost of computing the quasi-Newton direction in line-search limited-memory methods. The other approach is based on two new eigenvalue-based norms. The advantage of the new norms is that the trust-region subproblem is separable and each of the smaller subproblems is easy to solve. We show that our eigenvalue-based limited-memory trust-region methods are globally convergent. Moreover, we propose improved versions of the existing limited-memory trust-region algorithms. The presented results of numerical experiments demonstrate the efficiency of our approach, which is competitive with line-search versions of the L-BFGS method.
18.
Notes on the Dai-Yuan-Yuan modified spectral gradient method   Cited: 1 (self-citations: 0, others: 1)
In this paper, we give some notes on the two modified spectral gradient methods developed in [10]. These notes present the relationship between their stepsize formulae and some new secant equations in the quasi-Newton method. In particular, we also introduce another two new choices of stepsize. By using an efficient nonmonotone line search technique, we propose some new spectral gradient methods. Under some mild conditions, we show that these proposed methods are globally convergent. Numerical experiments on a large number of test problems from the CUTEr library are also reported, which show the efficiency of the proposed methods.
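The stepsize formulae discussed in this entry build on the two classical Barzilai–Borwein (spectral) stepsizes. A sketch of those baseline formulae (the Dai-Yuan-Yuan modifications from [10] are not reproduced):

```python
import numpy as np

def bb_stepsizes(s, y):
    """The two classical Barzilai-Borwein (spectral) stepsizes:
    alpha1 = s^T s / s^T y   (long BB step),
    alpha2 = s^T y / y^T y   (short BB step),
    with s = x_{k} - x_{k-1} and y = g_k - g_{k-1}."""
    return (s @ s) / (s @ y), (s @ y) / (y @ y)

# For a quadratic with Hessian A, both stepsizes are Rayleigh-quotient
# estimates of 1/eigenvalue(A), so they lie in [1/lambda_max, 1/lambda_min].
A = np.diag([1.0, 4.0])                # eigenvalues 1 and 4
x0, x1 = np.array([1.0, 1.0]), np.array([0.7, 0.4])
s = x1 - x0
y = A @ x1 - A @ x0
a1, a2 = bb_stepsizes(s, y)
print(0.25 <= a2 <= a1 <= 1.0)         # True
```

By the Cauchy–Schwarz inequality the short step never exceeds the long step, and modified spectral formulae of the kind studied here typically interpolate between or rescale these two baseline choices using secant-equation information.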