Similar Literature (20 results)
1.
A Trust Region Algorithm with Nonmonotone Line Search   Cited: 1 (self-citations: 0, other citations: 1)
By combining the nonmonotone Wolfe line search technique with the traditional trust region algorithm, we propose a new class of trust region algorithms for unconstrained optimization. The new algorithm solves the trust region subproblem only once per iteration, and at every iteration the Hessian approximation satisfies the quasi-Newton condition and remains positive definite from step to step. Under certain conditions, the global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain...
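To make the nonmonotone Wolfe test mentioned above concrete, here is a minimal sketch of such an acceptance check. It is an illustration only, not the paper's algorithm; the callables `f` and `grad`, the reference value `f_max` (the maximum objective value over recent iterates), and the default constants are assumptions.

```python
import numpy as np

def nonmonotone_wolfe_ok(f, grad, x, d, alpha, f_max, c1=1e-4, c2=0.9):
    """Check the nonmonotone Wolfe conditions at trial step length alpha.

    f_max replaces f(x) in the ordinary (monotone) Armijo condition, so the
    objective is allowed to rise above f(x) as long as it stays below f_max.
    """
    g = grad(x)
    x_new = x + alpha * d
    sufficient_decrease = f(x_new) <= f_max + c1 * alpha * float(g @ d)
    curvature = float(grad(x_new) @ d) >= c2 * float(g @ d)  # standard Wolfe curvature test
    return sufficient_decrease and curvature
```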

2.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known quasi-Newton BFGS and DFP updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
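One ingredient the abstract relies on is keeping the Hessian approximation "safely" positive definite. A minimal sketch of one such safeguard, assuming a Cholesky-based test with a user-chosen eigenvalue margin `mu` (both the name and the rule are illustrative, not from the paper):

```python
import numpy as np

def is_safely_positive_definite(B, mu=1e-10):
    """Check that B - mu*I is positive definite via a Cholesky attempt.

    A cheap test that can be run before accepting an updated Hessian
    approximation; mu bounds the smallest eigenvalue away from zero.
    """
    try:
        np.linalg.cholesky(B - mu * np.eye(B.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False
```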

3.
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the reference value is the current nonmonotone term, a convex combination of the previous nonmonotone term and the current objective function value, rather than the current objective function value alone. A larger stepsize can be obtained in each line search procedure, and nonmonotonicity is retained when the nonmonotone term is incorporated into the trust region method. Unlike the traditional trust region method, the algorithm avoids re-solving the subproblem when a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems.
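A minimal sketch of the convex-combination nonmonotone term described above. The weight `eta` and the recursion D_k = eta*D_{k-1} + (1-eta)*f_k are one natural reading of the abstract, not necessarily the paper's exact formula.

```python
def update_nonmonotone_term(D_prev, f_curr, eta=0.85):
    """Convex-combination nonmonotone term: D_k = eta*D_{k-1} + (1-eta)*f_k.

    Using D_k instead of f_k as the Armijo reference value permits
    occasional increases in the objective while keeping them bounded;
    eta controls the degree of nonmonotonicity (eta = 0 is monotone).
    """
    return eta * D_prev + (1.0 - eta) * f_curr
```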

4.
Analysis of a self-scaling quasi-Newton method   Cited: 1 (self-citations: 0, other citations: 1)
We study the self-scaling BFGS method of Oren and Luenberger (1974) for solving unconstrained optimization problems. For general convex functions, we prove that the method is globally convergent with inexact line searches. We also show that the directions generated by the self-scaling BFGS method approach Newton's direction asymptotically. This would ensure superlinear convergence if, in addition, the search directions were well-scaled, but we show that this is not always the case. We find that the method has a major drawback: to achieve superlinear convergence it may be necessary to evaluate the function twice per iteration, even very near the solution. An example is constructed to show that the step-sizes required to achieve a superlinear rate converge to 2 and 0.5 alternately. This work was supported by National Science Foundation Grant CCR-9101359, and by the Department of Energy Grant DE-FG02-87ER25047. This work was performed while the author was visiting Northwestern University.
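For reference, a sketch of a self-scaling BFGS update in the Oren-Luenberger spirit, applied to the inverse Hessian approximation H; the exact form analyzed in the paper may differ in details.

```python
import numpy as np

def self_scaling_bfgs_inverse(H, s, y):
    """Self-scaling BFGS update of the inverse Hessian approximation.

    tau = s'y / (y'Hy) is the Oren-Luenberger scaling factor; it rescales
    the inherited part of H before the usual BFGS correction is added.
    Assumes a curvature pair with s'y > 0.
    """
    rho = 1.0 / float(s @ y)
    tau = float(s @ y) / float(y @ (H @ y))
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return tau * (V @ H @ V.T) + rho * np.outer(s, s)
```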

5.
In this paper, a new nonmonotone MBFGS algorithm for unconstrained optimization is proposed. Under suitable assumptions, the global and superlinear convergence of the new nonmonotone MBFGS algorithm on convex objective functions is established. Numerical experiments show that this new nonmonotone MBFGS algorithm is competitive with the MBFGS algorithm and the nonmonotone BFGS algorithm.
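MBFGS is read here as a modified BFGS scheme in the spirit of Li and Fukushima, whose key step is to replace y by a modified vector with guaranteed positive curvature; a sketch under that assumption:

```python
import numpy as np

def mbfgs_pair(s, y, g_norm):
    """Modified curvature pair in the Li-Fukushima MBFGS spirit (a sketch).

    The modified y* satisfies s'y* >= ||g|| * ||s||^2 > 0 away from a
    stationary point, so the BFGS update stays well defined even for
    nonconvex objectives.
    """
    ss = float(s @ s)
    if g_norm <= 0.0 or ss <= 0.0:
        return s, y                      # degenerate data: leave the pair alone
    t = 1.0 + max(-float(y @ s) / (g_norm * ss), 0.0)
    return s, y + t * g_norm * s
```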

6.
The limited memory BFGS method (L-BFGS) is an adaptation of the BFGS method for large-scale unconstrained optimization. However, the L-BFGS method need not converge for nonconvex objective functions, and it is inefficient on highly ill-conditioned problems. In this paper, we propose a regularization strategy for the L-BFGS method, in which the regularization parameter compensates, in some sense, when the Hessian approximation tends to become ill-conditioned. We then propose a regularized L-BFGS method and establish its global convergence even when the objective function is nonconvex. Numerical results show that the proposed method is efficient.
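For context, the baseline L-BFGS direction is computed by the classical two-loop recursion sketched below; the paper's regularization further modifies the Hessian model, which is not reproduced here.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion returning d = -H*g (a sketch).

    s_list and y_list hold the stored curvature pairs, oldest first, with
    s'y > 0 for every pair.
    """
    q = g.copy()
    rhos = [1.0 / float(s @ y) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * float(s @ q)
        alphas.append(a)
        q -= a * y
    # Scale by the usual initial matrix H0 = gamma * I.
    if s_list:
        gamma = float(s_list[-1] @ y_list[-1]) / float(y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * float(y @ r)
        r += (a - b) * s
    return -r
```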

7.
Yu and Pu [Yu Z S, Pu D G. A new nonmonotone line search technique for unconstrained optimization. J Comput Appl Math, 2008, 219: 134-144] proposed a nonmonotone line search algorithm for unconstrained optimization. Unlike their work, this paper presents a nonmonotone filter curvature line search algorithm for the case where the optimization problem is nonconvex. By using negative curvature information from the Hessian matrix, the iterate sequence generated by the algorithm is shown to converge to a point satisfying the second-order sufficiency conditions. Global convergence is proved without assuming the existence of limit points, and the convergence rate of the algorithm is analyzed. Numerical experiments demonstrate the effectiveness of the algorithm.
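For small dense problems, the negative curvature information mentioned above can be extracted from an eigendecomposition; a hedged sketch (the paper's actual mechanism for obtaining curvature directions may well be different):

```python
import numpy as np

def negative_curvature_direction(H, g, tol=1e-8):
    """Return a direction of negative curvature of H, or None (a sketch).

    The eigenvector for the smallest eigenvalue is signed so that it is
    also a non-ascent direction for the gradient g; None means H is
    (numerically) positive semidefinite.
    """
    w, V = np.linalg.eigh(H)        # eigenvalues in ascending order
    if w[0] >= -tol:
        return None
    d = V[:, 0]
    return -d if float(g @ d) > 0 else d
```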

8.
Based on the second-order quasi-Newton equation and combined with the nonmonotone line search proposed by Zhang H. C., a diagonal second-order quasi-Newton algorithm for large-scale unconstrained optimization is constructed. At each iteration the algorithm approximates the inverse of the Hessian matrix by a diagonal matrix, which markedly reduces the storage and work needed to compute the search direction and offers a new approach to solving large unconstrained optimization problems. Under the usual assumptions, the global convergence and superlinear convergence of the algorithm are proved. Numerical experiments show that the algorithm is effective and feasible.
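One simple way to maintain a diagonal Hessian approximation in O(n) storage is to enforce the weak secant equation s'Ds = s'y on the diagonal alone. The sketch below illustrates that idea; it is not claimed to be the paper's exact update rule.

```python
import numpy as np

def diagonal_update(D, s, y, eps=1e-12):
    """Weak-secant update of a diagonal Hessian approximation (a sketch).

    D is stored as a 1-D array of diagonal entries. Adding lam * s_i^2 to
    each entry enforces s'D+s = s'y while keeping storage O(n).
    """
    s2 = s * s
    denom = float(s2 @ s2)            # sum of s_i^4
    if denom < eps:
        return D
    lam = (float(s @ y) - float(s2 @ D)) / denom
    return np.maximum(D + lam * s2, eps)   # keep the approximation positive definite
```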

9.
Convergence of the BFGS Algorithm for Nonconvex Optimization Problems   Cited: 1 (self-citations: 0, other citations: 1)
The BFGS algorithm is one of the best-known numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm under the Wolfe line search applied to a nonconvex objective function, and we give a sufficient condition under which the algorithm converges.

10.
In this paper, a new nonmonotone BFGS algorithm for unconstrained optimization is introduced. Under mild conditions, the global convergence of this new algorithm on convex functions is proved. Some numerical experiments show that this new nonmonotone BFGS algorithm is competitive with the BFGS algorithm.

11.
On the Nonmonotone Line Search   Cited: 10 (self-citations: 0, other citations: 10)
The technique of nonmonotone line search has received many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along different lines. The analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between nonmonotone line search and R-linear convergence under the assumption that the objective function is uniformly convex. In addition, taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest using a standard Armijo line search when the nonmonotone line search condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
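A sketch of the suggestion in the abstract: test the first trial step against the max-type nonmonotone condition, and fall back to the ordinary monotone Armijo rule while backtracking if it fails. Parameter names and defaults are illustrative.

```python
import numpy as np

def line_search_with_armijo_fallback(f, x, d, g, f_hist,
                                     c1=1e-4, shrink=0.5, max_backtracks=30):
    """Nonmonotone line search with a standard Armijo fallback (a sketch).

    f_hist holds the last few objective values; max(f_hist) is the
    nonmonotone reference used only for the first trial steplength.
    """
    gd = float(g @ d)
    alpha = 1.0
    if f(x + alpha * d) <= max(f_hist) + c1 * alpha * gd:
        return alpha                       # nonmonotone condition holds at the first trial
    f_curr = f_hist[-1]                    # switch to the monotone Armijo reference
    for _ in range(max_backtracks):
        alpha *= shrink
        if f(x + alpha * d) <= f_curr + c1 * alpha * gd:
            return alpha
    return alpha
```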

12.
A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques applied in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of this algorithm is established for minimizing a general nonlinear objective function with Lipschitz continuous derivatives. For convex objective functions, the algorithm maintains the optimal convergence rate of convex optimization. In the numerical experiments, the algorithm is instantiated with safeguarded nonlinear conjugate gradient search directions. Numerical results show that the nonmonotone algorithm performs significantly better than the corresponding monotone algorithm for solving the unconstrained optimization problems in the CUTEr library (Bongartz et al. in ACM Trans. Math. Softw. 21:123–160, 1995).

13.
In this paper we state some nonmonotone line search strategies for unconstrained optimization algorithms. Abstracting from the concrete line search strategy, we prove two general convergence results. Using this theory we can show the global convergence of the BFGS method with a nonmonotone line search strategy. In contrast to some former results about nonmonotone line search strategies, both our convergence results and their proofs are natural generalizations of known results for the monotone case.

14.
This paper proposes a new nonmonotone adaptive trust region method for unconstrained optimization. The method determines the trust region radius with the help of a scalar matrix approximating the Hessian of the objective function. Under the usual conditions, global convergence and local superlinear convergence results are given for the new algorithm, and numerical experiments verify the effectiveness of the new nonmonotone method.
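A hedged sketch of how a scalar (Rayleigh-quotient) Hessian estimate can drive an adaptive trust region radius; the paper's precise rule is not reproduced here, and both the constant `c` and the Cauchy-like form ||g||/sigma are assumptions.

```python
import numpy as np

def adaptive_radius(g, s, y, c=1.0, eps=1e-12):
    """Adaptive trust region radius from a scalar Hessian model (a sketch).

    sigma = s'y / s's is a Rayleigh-quotient curvature estimate; the radius
    c * ||g|| / sigma then shrinks where curvature is large and grows where
    it is small. The max(..., eps) guards against nonpositive curvature.
    """
    sigma = max(float(y @ s) / max(float(s @ s), eps), eps)
    return c * np.linalg.norm(g) / sigma
```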

15.
This paper is concerned with a new nonmonotone strategy and its application to the line search approach for unconstrained optimization. It is believed that nonmonotone techniques can improve the possibility of finding the global optimum and increase the convergence rate of algorithms. We first introduce a new nonmonotone strategy which uses a convex combination of the maximum function value of some preceding successful iterates and the current function value. We then incorporate the proposed nonmonotone strategy into an inexact Armijo-type line search approach to construct a more relaxed line search procedure. Global convergence to first-order stationary points is subsequently proved, and an R-linear convergence rate is established under suitable assumptions. Preliminary numerical results show the efficiency and robustness of the proposed approach for solving unconstrained nonlinear optimization problems.
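The proposed reference value can be written down directly from the abstract; a one-line sketch, with `eta` as the convex-combination weight (name and default are illustrative):

```python
def nonmonotone_reference(f_hist, eta=0.7):
    """Reference value R_k = eta * max(recent f) + (1 - eta) * f_k (a sketch).

    eta = 1 recovers the max-type nonmonotone rule and eta = 0 the monotone
    Armijo rule; intermediate values relax the line search gradually.
    """
    return eta * max(f_hist) + (1.0 - eta) * f_hist[-1]
```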

16.
We study the convergence properties of reduced Hessian successive quadratic programming for equality constrained optimization. The method uses a backtracking line search, and updates an approximation to the reduced Hessian of the Lagrangian by means of the BFGS formula. Two merit functions are considered for the line search: the ℓ1 function and Fletcher's exact penalty function. We give conditions under which local and superlinear convergence is obtained, and also prove a global convergence result. The analysis allows the initial reduced Hessian approximation to be any positive definite matrix, and does not assume that the iterates converge, or that the matrices are bounded. The effects of a second-order correction step, a watchdog procedure, and the choice of null space basis are considered. This work can be seen as an extension to reduced Hessian methods of the well-known results of Powell (1976) for unconstrained optimization. This author was supported, in part, by National Science Foundation grant CCR-8702403, Air Force Office of Scientific Research grant AFOSR-85-0251, and Army Research Office contract DAAL03-88-K-0086. This author was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contracts W-31-109-Eng-38 and DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.

17.
A new adaptive scaled Broyden-Fletcher-Goldfarb-Shanno (BFGS) method for unconstrained optimization is presented. The third term in the standard BFGS update formula is scaled in order to reduce the large eigenvalues of the approximation to the Hessian of the minimizing function. Under the inexact Wolfe line search conditions, the global convergence of the adaptive scaled BFGS method is proved under very general conditions, without assuming convexity of the minimizing function. On 80 unconstrained optimization test functions with a medium number of variables, preliminary numerical experiments show that this variant of the scaled BFGS method is more efficient than the standard BFGS update and than some other scaled BFGS methods.
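A sketch of a BFGS update whose third (rank-one y-term) is scaled by a factor `delta`. The paper chooses delta adaptively to damp large eigenvalues; here it is simply an input, so this is an illustration of the structure rather than the paper's method.

```python
import numpy as np

def scaled_bfgs_update(B, s, y, delta):
    """BFGS update with the third term y*y'/(y's) scaled by delta (a sketch).

    delta = 1 recovers the standard BFGS formula; delta < 1 damps the
    eigenvalue growth that the y-term can introduce. Assumes s'y > 0.
    """
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / float(s @ Bs)
            + delta * np.outer(y, y) / float(y @ s))
```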

18.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
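A minimal sketch of one spectral projected gradient trial point, assuming a user-supplied `project` routine onto the feasible convex set; the preconditioned variant the paper develops is not shown.

```python
import numpy as np

def spg_step(x, g, s, y, project, lam_min=1e-10, lam_max=1e10):
    """One spectral projected gradient trial point (a sketch).

    The spectral (Barzilai-Borwein) steplength s's / s'y is safeguarded to
    [lam_min, lam_max], and the gradient step is projected back onto the
    feasible set.
    """
    sy = float(s @ y)
    lam = lam_max if sy <= 0 else min(max(float(s @ s) / sy, lam_min), lam_max)
    return project(x - lam * g)
```

For box constraints, `project` can be as simple as `lambda z: np.clip(z, lower, upper)`.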

19.
Wan Zhong, Feng Dongdong. 《计算数学》 (Mathematica Numerica Sinica), 2011, 33(4): 387-396
Motivated by the advantages of nonmonotone line search in seeking optimal solutions of optimization problems, a new class of nonmonotone cautious BFGS algorithms is proposed. Unlike existing methods, the algorithm parameter that controls the degree of nonmonotonicity is not fixed at a constant value; instead, it is adjusted automatically using available information from the objective function and its gradient, so as to improve the numerical performance of the algorithm. Under suitable assumptions, the global convergence of the new nonmonotone cautious BFGS algorithm is established. Benchmark optimization test problems are used to test the algo...
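The cautious BFGS rule skips the quasi-Newton update unless the measured curvature is sufficiently positive relative to the gradient norm; a sketch with illustrative constants `eps` and `alpha` (the paper's adaptive nonmonotonicity parameter is a separate device, not shown here):

```python
import numpy as np

def cautious_bfgs_update(B, s, y, g_norm, eps=1e-6, alpha=1.0):
    """Cautious BFGS update of a Hessian approximation B (a sketch).

    The update is applied only when s'y >= eps * ||g||^alpha * ||s||^2,
    the usual cautious condition that keeps B positive definite even on
    nonconvex problems; otherwise B is returned unchanged.
    """
    sy = float(s @ y)
    if sy < eps * g_norm**alpha * float(s @ s):
        return B                       # skip the update: curvature too weak
    Bs = B @ s
    return B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / sy
```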

20.
In this work, we present a new hybrid conjugate gradient method based on convex hybridization of the conjugate gradient update parameters of DY and HS+, adapting a quasi-Newton philosophy. The hybridization parameter is computed by minimizing the distance between the hybrid conjugate gradient direction and the self-scaling memoryless BFGS direction. Furthermore, a significant property of our proposed method is that it ensures sufficient descent independent of the accuracy of the line search. The global convergence of the proposed method is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection indicate that our proposed method is preferable and in general superior to classic conjugate gradient methods in terms of efficiency and robustness.
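A sketch of the convex hybridization of the DY and HS+ parameters. Computing `theta` by matching the self-scaling memoryless BFGS direction is the paper's contribution and is not reproduced, so theta is just an input here.

```python
import numpy as np

def hybrid_cg_direction(g_new, g_old, d, theta):
    """Hybrid DY / HS+ conjugate gradient direction (a sketch).

    beta is the convex combination theta*beta_DY + (1-theta)*beta_HS+ with
    theta in [0, 1]. A Wolfe line search guarantees d'(g_new - g_old) > 0,
    so the shared denominator is safe.
    """
    yk = g_new - g_old
    denom = float(d @ yk)
    beta_dy = float(g_new @ g_new) / denom
    beta_hsp = max(float(g_new @ yk) / denom, 0.0)   # HS+ truncates at zero
    beta = theta * beta_dy + (1.0 - theta) * beta_hsp
    return -g_new + beta * d
```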
