Similar Articles (20 results)
1.
A quasi-Newton extension of the Goldstein-Levitin-Polyak (GLP) projected gradient algorithm for constrained optimization is considered. Essentially, this extension projects an unconstrained descent step onto the feasible region. The determination of the stepsize is divided into two stages. The first is a stepsize sequence, chosen from the range [1,2], converging to unity; this determines the size of the unconstrained step. The second is a stepsize chosen from the range [0,1] according to a stepsize strategy; it determines the length of the projected step. Two such strategies are considered. The first bounds the objective function decrease by a conventional linear functional, whereas the second uses a quadratic functional as a bound. The introduction of the unconstrained step provides the option of taking steps that are larger than unity. It is shown that unit steplengths, and subsequently superlinear convergence rates, are attained if the projection of the quasi-Newton Hessian approximation approaches the projection of the Hessian at the solution. Thus, the requirement in the GLP algorithm for a positive definite Hessian at the solution is relaxed. This allows the use of strictly positive definite Hessian approximations, thereby simplifying the quadratic subproblem involved, even if the Hessian at the solution is not strictly positive definite. This research was funded by a Science and Engineering Research Council Advanced Fellowship. The author is also grateful to an anonymous referee for numerous constructive criticisms and comments.
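A minimal sketch of the two-stage projected step described above, assuming a box feasible region (so the projection is a componentwise clip) and a user-supplied inverse Hessian approximation; the function and parameter names are illustrative, not the paper's:

```python
import numpy as np

def glp_step(f, grad, x, H_inv, project, mu=1.5, sigma=1e-4, beta=0.5):
    """One two-stage GLP-style projected quasi-Newton step (sketch)."""
    # Stage 1: unconstrained quasi-Newton step scaled by mu in [1, 2],
    # projected onto the feasible region to give the search direction.
    d = project(x - mu * (H_inv @ grad(x))) - x
    # Stage 2: backtrack lam in (0, 1] until a linear decrease bound holds.
    fx, slope = f(x), grad(x) @ d
    lam = 1.0
    while f(x + lam * d) > fx + sigma * lam * slope and lam > 1e-12:
        lam *= beta
    return x + lam * d

# Example projection for box constraints lo <= x <= hi:
# project = lambda z: np.clip(z, lo, hi)
```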

2.
We introduce an algorithm for unconstrained optimization based on transforming the Newton method with line search into a gradient descent method. The main idea in the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength is computed from a Taylor expansion at two successive iterates combined with a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
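To illustrate the general idea (not the paper's exact formula), a scalar curvature estimate can be extracted from a second-order Taylor expansion linking two successive iterates; the sketch below is a toy version under that assumption:

```python
import numpy as np

def taylor_scalar_step(f, grad, x_prev, x, eps=1e-8):
    """Sketch: scalar Hessian estimate from a Taylor expansion at two
    successive iterates, followed by backtracking (illustrative only)."""
    g, s = grad(x), x - x_prev
    # f(x_prev) ~ f(x) - g.s + 0.5 * gamma * ||s||^2  =>  solve for gamma.
    gamma = 2.0 * (f(x_prev) - f(x) + g @ s) / max(s @ s, eps)
    t = 1.0 / max(gamma, eps)            # keep the scaling positive
    while f(x - t * g) > f(x) - 1e-4 * t * (g @ g) and t > 1e-12:
        t *= 0.5                          # backtracking line search
    return x - t * g
```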

3.
Conjugate gradient methods are efficient methods for minimizing differentiable objective functions in large dimension spaces. However, converging line search strategies are usually not easy to choose, nor to implement. Sun and colleagues (Ann. Oper. Res. 103:161–173, 2001; J. Comput. Appl. Math. 146:37–45, 2002) introduced a simple stepsize formula. However, the associated convergence domain happens to be over-restrictive, since it precludes the optimal stepsize in the convex quadratic case. Here, we identify this stepsize formula with one iteration of the Weiszfeld algorithm in the scalar case. More generally, we propose to make use of a finite number of iterates of such an algorithm to compute the stepsize. In this framework, we establish a new convergence domain that incorporates the optimal stepsize in the convex quadratic case. The authors thank the associate editor and the reviewer for helpful comments and suggestions. C. Labat is now in a postdoctoral position, Johns Hopkins University, Baltimore, MD, United States.
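For background, the classic Weiszfeld iteration (for the Fermat-Weber point) that the stepsize formula is identified with looks as follows; this is the textbook iteration only, not the CG stepsize formula itself:

```python
import numpy as np

def weiszfeld_step(y, points, eps=1e-12):
    """One classic Weiszfeld iteration for the Fermat-Weber point:
    a distance-weighted average of the data points (sketch)."""
    w = 1.0 / np.maximum(np.linalg.norm(points - y, axis=1), eps)
    return (w[:, None] * points).sum(axis=0) / w.sum()
```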

4.
R-linear convergence of the Barzilai and Borwein gradient method
Combined with non-monotone line search, the Barzilai and Borwein (BB) gradient method has been successfully extended for solving unconstrained optimization problems and is competitive with conjugate gradient methods. In this paper, we establish the R-linear convergence of the BB method for any-dimensional strongly convex quadratics. One corollary of this result is that the BB method is also locally R-linearly convergent for general objective functions, and hence the stepsize in the BB method will always be accepted by the non-monotone line search when the iterate is close to the solution.
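The BB stepsize is simple enough to state in a few lines; below is a minimal sketch of the plain (line-search-free) BB iteration on a strongly convex quadratic, using the common BB1 formula:

```python
import numpy as np

def bb_gradient(grad, x0, n_iter=100, alpha0=1e-3):
    """Plain Barzilai-Borwein gradient method (sketch, no line search)."""
    x, g, alpha = x0.astype(float), grad(x0), alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 stepsize alpha = s's / s'y, safeguarded when s'y ~ 0.
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# Example: minimize 0.5 x'Ax - b'x with A = diag(1, 10).
A, b = np.diag([1.0, 10.0]), np.array([1.0, 1.0])
x_min = bb_gradient(lambda x: A @ x - b, np.zeros(2))
```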

5.
In this paper we view the Barzilai and Borwein (BB) method from a new angle, and present a new adaptive Barzilai and Borwein (NABB) method with a nonmonotone line search for general unconstrained optimization. In the proposed method, the scalar approximation to the Hessian matrix is updated by the Broyden class formula to generate an adaptive stepsize. It is remarkable that the new stepsize is chosen adaptively in the interval which contains the two well-known BB stepsizes. Moreover, for the negative curvature direction, a strategy for the choice of the stepsize is designed to accelerate the convergence rate of the NABB method. Furthermore, we apply the NABB method without any line search to strictly convex quadratic minimization. The numerical experiments show the NABB method is very promising.
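As a rough illustration of "a stepsize in the interval bracketed by the two BB stepsizes": the convex-combination rule below is a hypothetical placeholder, since the paper's actual Broyden-class scalar update is not reproduced here:

```python
def interval_bb_stepsize(s, y, theta=0.5):
    """Sketch: a stepsize between the short and long BB stepsizes.
    theta is a made-up mixing parameter, not the paper's adaptive rule."""
    bb1 = (s @ s) / (s @ y)   # long BB stepsize
    bb2 = (s @ y) / (y @ y)   # short BB stepsize (bb2 <= bb1 if s'y > 0)
    return theta * bb1 + (1.0 - theta) * bb2
```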

6.
A Lampariello modified diagonal sparse quasi-Newton algorithm with a new nonmonotone line search rule
Sun Qingying, Cui Bin, Wang Changyu, Mathematica Numerica Sinica, 2008, 30(3): 255-268
This paper designs a Lampariello modified diagonal sparse quasi-Newton algorithm with a new nonmonotone line search rule for unconstrained optimization. The new steplength rule is similar to the Grippo nonmonotone line search rule and contains it as a special case. At each line search, the new rule produces a larger steplength than the Grippo rule while guaranteeing global convergence of the algorithm. Numerical examples show that the algorithm is effective and suitable for large-scale problems.
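For reference, the Grippo-Lampariello-Lucidi (GLL) nonmonotone Armijo rule that the new rule generalizes can be sketched as follows (standard form, assuming f_hist holds the most recent M objective values):

```python
def gll_stepsize(f, x, d, g_dot_d, f_hist, sigma=1e-4, beta=0.5):
    """GLL nonmonotone Armijo backtracking (sketch): accept alpha once
    f(x + alpha d) <= max(last M values) + sigma * alpha * g'd."""
    f_ref, alpha = max(f_hist), 1.0
    while f(x + alpha * d) > f_ref + sigma * alpha * g_dot_d and alpha > 1e-12:
        alpha *= beta
    return alpha
```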

7.
8.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage-steplength nonmonotone variable-metric gradient projection method of Zhang H.C. type is established for optimization problems with convex-set constraints, and global convergence of the algorithm is proved. Numerical experiments show that the algorithm is effective and suitable for large-scale problems.
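The "Zhang H.C." nonmonotone rule referred to here is commonly the Zhang-Hager weighted-average scheme; a sketch of its reference-value update (standard form, initialized with Q = 1 and C = f(x_0)):

```python
def zhang_hager_update(C, Q, f_new, eta=0.85):
    """Zhang-Hager nonmonotone reference update (sketch): C is a
    weighted average of past objective values, used in place of the
    current f in the Armijo test; eta tunes the nonmonotonicity."""
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f_new) / Q_new
    return C_new, Q_new
```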

9.
A projected trust region strategy is combined with a nonmonotone line search algorithm to solve bound-constrained nonlinear semismooth systems of equations. A trust region subproblem is built from the simply bound-constrained nonlinear optimization problem; the semismooth Newton-type step is projected onto the feasible region to obtain a projected Newton trial step and hence a new search direction, and the nonmonotone line search then yields a backtracking step and a new steplength. Under reasonable conditions, the algorithm is shown to be not only globally convergent but also to retain a superlinear convergence rate. The nonmonotone technique helps overcome highly nonlinear ill-conditioned problems and accelerates the convergence process.

10.
A new subspace minimization conjugate gradient algorithm with a nonmonotone Wolfe line search is proposed and analyzed. In the scheme, we propose two choices of the search direction by minimizing a quadratic approximation of the objective function in special subspaces, and state criteria for how to choose the direction. Under given conditions, we obtain the significant conclusion that each choice of the direction satisfies the sufficient descent property. Based on a measure of how close the function is to a quadratic, a new strategy for choosing the initial stepsize is presented for the line search. With the nonmonotone Wolfe line search, we prove the global convergence of the proposed method for general nonlinear functions under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.

11.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with the nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters in the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.

12.
A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein: their method is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and performs quite well for high-dimensional problems. The BB method is not monotone, and thus it is not easy to generalize to general nonlinear functions unless certain non-monotone techniques are applied. Therefore, it is very desirable to find stepsize formulae that enable fast convergence while possessing the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within 3 iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large-scale problems and better for small-scale problems.
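The paper's new stepsize formula is not reproduced in this abstract; for the quadratic case, the exact line search used in the alternate iterations is the classical Cauchy step, sketched below:

```python
import numpy as np

def cauchy_sd_step(A, b, x):
    """Exact line search for steepest descent on 0.5 x'Ax - b'x:
    the Cauchy stepsize alpha = g'g / g'Ag (sketch)."""
    g = A @ x - b
    return x - ((g @ g) / (g @ A @ g)) * g
```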

13.
In this paper, we present a new algorithm to accelerate the Chambolle gradient projection method for total variation image restoration. The new proposed method considers an approximation of the Hessian based on the secant equation. Combined with the quasi-Cauchy equations and diagonal updating, we can obtain a positive definite diagonal matrix. In the proposed minimization model, we use the positive definite diagonal matrix in place of the constant time stepsize in Chambolle's method. The global convergence of the proposed scheme is proved. Some numerical results illustrate the efficiency of this method. Moreover, we also extend the quasi-Newton diagonal updating method to solve nonlinear systems of monotone equations. Performance comparisons show that the proposed method is efficient. A practical application of the monotone equations is shown and tested on sparse signal reconstruction in compressed sensing. Copyright © 2015 John Wiley & Sons, Ltd.
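A sketch of the standard least-change diagonal update under the weak-secant (quasi-Cauchy) condition s'Ds = s'y; the paper's particular safeguards are not reproduced, and the positivity clamp below is an illustrative assumption:

```python
import numpy as np

def quasi_cauchy_diag_update(d, s, y, eps=1e-12):
    """Least-change diagonal update satisfying s'(D_new)s = s'y (sketch).
    d holds the diagonal of D; s, y are the step and gradient change."""
    s2 = s * s
    denom = s2 @ s2                      # sum of s_i^4
    if denom > eps:
        d = d + ((s @ y - (d * s) @ s) / denom) * s2
    return np.maximum(d, eps)            # crude positivity safeguard
```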

14.
The trust region (TR) method for optimization is a class of effective methods. The conic model can be regarded as a generalized quadratic model, and it possesses the good convergence properties of the quadratic model near the minimizer. The Barzilai and Borwein (BB) gradient method is also an effective method; it can be used for solving large-scale optimization problems, avoiding the expensive computation and storage of matrices. In addition, the BB stepsize is easy to determine without large computational effort. In this paper, based on the conic trust region framework, we employ the generalized BB stepsize and propose a new nonmonotone adaptive trust region method based on a simple conic model for large-scale unconstrained optimization. Unlike the traditional conic model, the Hessian approximation is a scalar matrix based on the generalized BB stepsize, which results in a simple conic model. By adding the nonmonotone technique and the adaptive technique to the simple conic model, the new method needs less storage and converges faster. The global convergence of the algorithm is established under certain conditions. Numerical results indicate that the new method is effective and attractive for large-scale unconstrained optimization problems.

15.
Summary. This paper studies projected Barzilai-Borwein (PBB) methods for large-scale box-constrained quadratic programming. Recent work on this method has modified the PBB method by incorporating the Grippo-Lampariello-Lucidi (GLL) nonmonotone line search, so as to enable global convergence to be proved. We show by many numerical experiments that the performance of the PBB method deteriorates if the GLL line search is used. We have therefore considered the question of whether the unmodified method is globally convergent, which we show not to be the case, by exhibiting a counterexample in which the method cycles. A new projected gradient method (PABB) is then considered that alternately uses the two Barzilai-Borwein steplengths. We also give an example in which this method may cycle, although its practical performance is seen to be superior to the PBB method. With the aim of both ensuring global convergence and preserving the good numerical performance of the unmodified methods, we examine other recent work on nonmonotone line searches and propose a new adaptive variant with some attractive features. Further numerical experiments show that the PABB method with the adaptive line search is the best BB-like method in the positive definite case, and it compares reasonably well against the GPCG algorithm of Moré and Toraldo. In the indefinite case, the PBB method with the adaptive line search is shown on some examples to find local minima with better solution values, and hence may be preferred for this reason. This work was supported by the EPSRC in the UK (no. GR/R87208/01) and the Chinese NSF grant (no. 10171104). Acknowledgement: the authors would like to thank the two anonymous referees for their useful suggestions and comments.
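A minimal sketch of the alternating projected BB idea on a box-constrained quadratic (no line search, and without the paper's adaptive safeguard); the projection is a componentwise clip:

```python
import numpy as np

def pabb(A, b, lo, hi, x0, n_iter=200):
    """Projected alternating-BB sketch for min 0.5 x'Ax - b'x, lo<=x<=hi."""
    project = lambda z: np.clip(z, lo, hi)
    x = project(x0.astype(float))
    g, alpha = A @ x - b, 1.0
    for k in range(n_iter):
        x_new = project(x - alpha * g)
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:
            # alternate the long (BB1) and short (BB2) steplengths
            alpha = (s @ s) / sy if k % 2 == 0 else sy / (y @ y)
        x, g = x_new, g_new
    return x
```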

16.
A flow search approach is presented in this paper. In the approach, each iterative process involves a subproblem, whose variables are the stepsize parameters. Every feasible solution of the subproblem corresponds to some serial search stages, the stepsize parameters in different search stages may interact mutually, and their optimal values are determined by evaluating the total effect of the interaction. The main idea of the flow search approach is illustrated via the minimization of a convex quadratic function. Based on the flow search approach, some properties of the m-step linear conjugate gradient algorithm are analyzed and new bounds on its convergence rate are also presented. Theoretical and numerical results indicate that the new bounds are better than the well-known ones.

17.
Described here is the structure and theory for a sequential quadratic programming algorithm for solving sparse nonlinear optimization problems. Also provided are the details of a computer implementation of the algorithm along with test results. The algorithm maintains a sparse approximation to the Cholesky factor of the Hessian of the Lagrangian. The solution to the quadratic program generated at each step is obtained by solving a dual quadratic program using a projected conjugate gradient algorithm. An updating procedure is employed that does not destroy sparsity.

18.
In this paper, an algorithm is developed for solving a nonlinear programming problem with linear constraints. The algorithm performs two major computations. First, the search vector is determined by projecting the negative gradient of the objective function on a polyhedral set defined in terms of the gradients of the equality constraints and the near-binding inequality constraints. This least-distance program is solved by Lemke's complementary pivoting algorithm after eliminating the equality constraints using Cholesky's factorization. The second major calculation determines a stepsize by first computing an estimate based on a quadratic approximation of the function and then finalizing the stepsize using Armijo's inexact line search. It is shown that any accumulation point of the algorithm is a Kuhn-Tucker point. Furthermore, it is shown that, if an accumulation point satisfies the second-order sufficiency optimality conditions, then the whole sequence of iterates converges to that point. Computational testing of the algorithm is presented.
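A sketch of the stepsize stage only: a quadratic-model initial estimate followed by Armijo backtracking. The initial guess alpha0 = 2(f_k - f_{k-1}) / g'd is a standard interpolation formula assumed here for illustration, not necessarily the paper's estimate:

```python
def armijo_with_quadratic_guess(f, x, d, g_dot_d, f_prev,
                                sigma=1e-4, beta=0.5):
    """Armijo backtracking seeded by a quadratic-model stepsize (sketch)."""
    fx = f(x)
    # Quadratic interpolation guess, safeguarded into (0, 1].
    alpha = 2.0 * (fx - f_prev) / g_dot_d if g_dot_d < 0 else 1.0
    alpha = min(max(alpha, 1e-8), 1.0)
    while f(x + alpha * d) > fx + sigma * alpha * g_dot_d and alpha > 1e-12:
        alpha *= beta
    return alpha
```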

19.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

20.
A new class of nonmonotone memory gradient methods and their global convergence
Building on the nonmonotone Armijo line search, a new nonmonotone line search is proposed, and a class of memory gradient methods under this line search is studied; their global convergence is proved under fairly weak conditions. Compared with the nonmonotone Armijo line search, the new line search can produce larger steplengths at each iteration, so that the objective value decreases sufficiently and the computational cost of the algorithm is reduced.
