Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
Adaptive Two-Point Stepsize Gradient Algorithm   (Cited by: 7; self-citations: 0, citations by others: 7)
Combined with a nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters of the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
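As a concrete illustration of the ingredients discussed above, the sketch below combines the two-point (Barzilai–Borwein) stepsize with a nonmonotone Armijo test over the last M function values; the memory parameter M plays exactly the role the abstract refers to. This is a minimal sketch under assumed parameter choices and safeguards, not the authors' implementation.

```python
import numpy as np

def bb_nonmonotone_gradient(f, grad, x0, M=10, gamma=1e-4, max_iter=1000, tol=1e-6):
    """Two-point stepsize (Barzilai-Borwein) gradient method with a
    nonmonotone Armijo line search over the last M function values.
    Illustrative sketch only; parameter choices are assumptions."""
    x = x0.copy()
    g = grad(x)
    alpha = 1.0 / max(np.linalg.norm(g), 1.0)  # conservative first trial stepsize
    history = [f(x)]                           # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        f_ref = max(history[-M:])              # nonmonotone reference value
        t = alpha
        while f(x - t * g) > f_ref - gamma * t * (g @ g):
            t *= 0.5                           # backtrack until acceptance
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)          # BB1 two-point stepsize
        x, g = x_new, g_new
        history.append(f(x))
    return x
```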

2.
刘景辉  马昌凤  陈争 《计算数学》2012,34(3):275-284
Building on the classical trust-region framework, a new trust-region algorithm with line search is proposed for unconstrained optimization problems. The algorithm uses a large-stepsize Armijo line search to obtain the iteration stepsize, which avoids the substantial cost of solving a trust-region subproblem at every iteration and thus makes the method suitable for large-scale optimization problems. Under suitable conditions, we prove global convergence of the algorithm. Numerical results show that the proposed algorithm is effective.
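A rough sketch of the general idea — backtracking along the already-computed trial step instead of re-solving the trust-region subproblem when the step is rejected — might look as follows. The Cauchy-point subproblem solver and all constants are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

def tr_step_with_linesearch(f, g, B, x, delta, gamma=1e-4):
    """One iteration of a trust-region-type method that, instead of re-solving
    the subproblem when the trial step is rejected, backtracks along the
    trial step with an Armijo-type line search. Assumes g != 0. Sketch only."""
    # Cauchy point: minimize the quadratic model along -g within the radius.
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
    d = -(tau * delta / np.linalg.norm(g)) * g   # trial step from the subproblem
    t = 1.0
    while f(x + t * d) > f(x) + gamma * t * (g @ d):
        t *= 0.5                                 # line search instead of a re-solve
    return x + t * d
```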

3.
In this paper, a new type of stepsize for gradient methods, the approximate optimal stepsize, is introduced to interpret the Barzilai–Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize is presented for the strictly convex quadratic minimization problem. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then use the two well-known BB stepsizes to truncate it in order to improve the numerical behaviour, and take the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
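The following sketch illustrates the truncation idea: an approximate optimal stepsize from a quadratic model is clipped into the interval spanned by the two classical BB stepsizes. The model stepsize shown here is the plain one-dimensional minimizer along the negative gradient with an assumed Hessian approximation; the paper's multi-step quasi-Newton model and its exact truncation rule are more elaborate, so treat this purely as an assumed simplification.

```python
import numpy as np

def truncated_approx_optimal_stepsize(s, y, g, B_approx):
    """Clip an approximate optimal stepsize into [BB2, BB1].
    s = x_k - x_{k-1}, y = g_k - g_{k-1} (assumes s @ y > 0);
    B_approx is an assumed Hessian approximation for the quadratic model."""
    bb1 = (s @ s) / (s @ y)                    # long BB stepsize
    bb2 = (s @ y) / (y @ y)                    # short BB stepsize (bb2 <= bb1)
    alpha_opt = (g @ g) / (g @ B_approx @ g)   # 1-D minimizer of the model along -g
    return min(max(alpha_opt, bb2), bb1)       # truncation by the BB pair
```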

4.
In this paper, we propose a globally convergent Polak–Ribière–Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions, employing an Armijo-type line search that is simpler and less demanding than those defined in [4,10]. An attractive property of this method is that the initial stepsize can be chosen as the one-dimensional minimizer of the quadratic model $\Phi(t) := f(x_k) + t\, g_k^T d_k + \tfrac{1}{2} t^2\, d_k^T Q_k d_k$, where $Q_k$ is a positive definite matrix carrying some second-order information about the objective function $f$. This line search may therefore make the stepsize $t_k$ more easily accepted. Preliminary numerical results show that this method is efficient.
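A minimal sketch of one such step may clarify: the trial stepsize is taken as the exact minimizer of the quadratic model $\Phi(t)$, i.e. $t_0 = -g_k^T d_k / (d_k^T Q_k d_k)$, and then reduced by an Armijo-type test. The descent safeguard and the choice of $Q_k$ (passed in by the caller) are assumptions for illustration, not the paper's exact rules.

```python
import numpy as np

def prp_step(f, grad, x, g, d_prev, g_prev, Q, gamma=1e-4):
    """One PRP conjugate gradient step whose initial stepsize minimizes the
    quadratic model Phi(t) = f(x) + t g^T d + (t^2/2) d^T Q d. Sketch only."""
    beta = g @ (g - g_prev) / (g_prev @ g_prev)  # Polak-Ribiere-Polyak parameter
    d = -g + beta * d_prev
    if g @ d >= 0:                               # safeguard: fall back to steepest descent
        d = -g
    t = -(g @ d) / (d @ Q @ d)                   # minimizer of the quadratic model
    while f(x + t * d) > f(x) + gamma * t * (g @ d):
        t *= 0.5                                 # Armijo-type backtracking
    return x + t * d, d
```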

5.
A. El Ghali 《Optimization》2016,65(7):1497-1518
We present an implementable algorithm for minimizing a convex function, which is not necessarily differentiable, subject to linear equality constraints and to nonnegativity bounds on the variables. The algorithm is based on extending the variant proposed by Luenberger to the nondifferentiable case and on using the bundle techniques introduced by Lemaréchal to approximate the subdifferential of the objective function. In particular, at each iteration, we compute a search direction by solving a quadratic subproblem, and an inexact line search along this direction yields a decrease in the objective value. Under some assumptions, the convergence of the proposed algorithm is analysed. Finally, some numerical results are presented, which show that the algorithm performs efficiently.

6.
Two natural and efficient stopping criteria are derived for conjugate gradient (CG) methods, based on iteration parameters. The derivation makes use of the inner-product matrix B defining the CG method. In particular, the relationship between the eigenvalues and the B-norm of a matrix is investigated, and it is shown that the ratio of the largest to smallest eigenvalues defines the B-condition number of the matrix. Upper and lower bounds on various measures of the error are also given. The compound stopping criterion presented here is an obvious default in software packages because it does not require any additional norm computations.
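For comparison, a plain conjugate gradient loop is sketched below with the usual relative-residual stopping test, built only from quantities the iteration already maintains. The paper's compound criterion is likewise assembled from such iteration parameters; this generic residual test is only a stand-in, and the SPD test matrix is an assumption.

```python
import numpy as np

def cg(A, b, tol=1e-8, max_iter=None):
    """Conjugate gradients for a symmetric positive definite A, stopping on
    the residual norm already tracked by the iteration (no extra norm
    computations beyond the initial norm of b). Generic sketch only."""
    n = len(b)
    max_iter = max_iter or n
    norm_b = np.linalg.norm(b)
    x = np.zeros(n)
    r = b.copy()                      # residual b - A x for x = 0
    p = r.copy()                      # first search direction
    rr = r @ r
    for _ in range(max_iter):
        if np.sqrt(rr) <= tol * norm_b:
            break                     # relative-residual stopping test
        Ap = A @ p
        alpha = rr / (p @ Ap)         # exact minimizer along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p     # conjugate direction update
        rr = rr_new
    return x
```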

7.
景书杰  赵海燕 《数学杂志》2014,34(6):1193-1199
This paper studies the constrained optimization problem min_{x∈Ω} f(x). By combining the conjugate gradient algorithm with the GLP gradient-projection idea, a new conjugate gradient projection algorithm is constructed, and its global convergence is established under the Wolfe line search.

8.
《Optimization》2012,61(7):1027-1042
In order to take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest two hybridizations of these methods based on Andrei's approach of hybridizing the CG parameters convexly and Powell's approach of nonnegative restriction of the CG parameters. The hybridization parameter in our methods is computed from a modified secant equation obtained based on the search direction of the Hager–Zhang nonlinear CG method. We show that if the line search fulfils the Wolfe conditions, then one of our methods is globally convergent for uniformly convex functions and the other is globally convergent for general functions. We report some numerical results demonstrating the efficiency of our methods in the sense of the performance profile introduced by Dolan and Moré.
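A sketch of the convex hybridization follows: the direction uses β_k = (1−θ_k) β_k^{HS} + θ_k β_k^{DY}, restricted to be nonnegative in Powell's sense. How θ_k is obtained from the modified secant equation is the paper's contribution; here it is simply passed in as a parameter, which is an assumption.

```python
import numpy as np

def hybrid_hs_dy_direction(g, g_prev, d_prev, theta):
    """Search direction from a convex combination of the Hestenes-Stiefel and
    Dai-Yuan CG parameters, with Powell's nonnegativity restriction.
    theta in [0,1] is assumed given (the paper derives it from a modified
    secant equation based on the Hager-Zhang direction)."""
    y = g - g_prev
    beta_hs = (g @ y) / (d_prev @ y)     # Hestenes-Stiefel parameter
    beta_dy = (g @ g) / (d_prev @ y)     # Dai-Yuan parameter
    beta = (1.0 - theta) * beta_hs + theta * beta_dy
    beta = max(beta, 0.0)                # Powell's nonnegative restriction
    return -g + beta * d_prev
```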

9.
A readily implementable algorithm is given for minimizing a (possibly nondifferentiable and nonconvex) locally Lipschitz continuous function f subject to linear constraints. At each iteration a polyhedral approximation to f is constructed from a few previously computed subgradients and an aggregate subgradient, which accumulates the past subgradient information. This approximation and the linear constraints generate the constraints of the search-direction-finding subproblem, which is a quadratic programming problem. A stepsize is then found by an approximate line search. All of the algorithm's accumulation points are stationary. Moreover, the algorithm converges when f happens to be convex.

10.
There are many applications related to singly linearly constrained quadratic programs subject to upper and lower bounds. In this paper, a new algorithm based on secant approximation is provided for the case in which the Hessian matrix is diagonal and positive definite. To deal with the general case where the Hessian is not diagonal, a new efficient projected gradient algorithm is proposed. The basic features of the projected gradient algorithm are: 1) a new formula is used for the stepsize; 2) a recently established adaptive nonmonotone line search is incorporated; and 3) the optimal stepsize is determined by quadratic interpolation if the nonmonotone line search criterion fails to be satisfied. Numerical experiments on large-scale random test problems and some medium-scale quadratic programs arising in the training of Support Vector Machines demonstrate the usefulness of these algorithms. This work was supported by the EPSRC in the UK (no. GR/R87208/01) and by Chinese NSF grants (no. 10171104 and 40233029).
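For the diagonal-Hessian case, the subproblem reduces to one-dimensional root-finding in the multiplier of the single linear constraint, which is where the secant idea enters. The sketch below solves the related projection problem min ½‖x−c‖² s.t. aᵀx = b, l ≤ x ≤ u by bisection on the multiplier (a secant update, as in the paper, would converge faster); the bracket-expansion constants are assumptions.

```python
import numpy as np

def project_singly_constrained_box(c, a, b, l, u, tol=1e-10):
    """Project c onto {x : a^T x = b, l <= x <= u} by bisection on the
    Lagrange multiplier lam of the equality constraint. For each lam the
    box part has the closed-form solution clip(c + lam*a, l, u); we then
    find lam so the equality holds. Assumes the feasible set is nonempty."""
    def phi(lam):                       # residual of the equality constraint;
        x = np.clip(c + lam * a, l, u)  # phi is nondecreasing in lam
        return a @ x - b
    lo, hi = -1.0, 1.0
    while phi(lo) > 0:                  # expand the bracket downward
        lo *= 2.0
    while phi(hi) < 0:                  # expand the bracket upward
        hi *= 2.0
    while hi - lo > tol:                # plain bisection on the multiplier
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return np.clip(c + 0.5 * (lo + hi) * a, l, u)
```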

11.
A family of accelerated conjugate direction methods, corresponding to the Broyden family of quasi-Newton methods, is described. It is shown that all members of the family generate the same sequence of points approximating the optimum and the same sequence of search directions, provided only that each direction vector is normalized before the stepsize to be taken in that direction is determined. With minimal restrictions on how the stepsize is determined (sufficient only for convergence), the accelerated methods applied to the optimization of a function of n variables are shown to have an (n+1)-step quadratic rate of convergence. Furthermore, the information needed to generate an accelerating step can be stored in a single n-vector, rather than the usual n×n symmetric matrix, without changing the theoretical order of convergence. The relationships between this family of methods and existing conjugate direction methods are discussed, and numerical experience with two members of the family is presented. This research was sponsored by the United States Army under Contract No. DAAG29-75-C-0024. The author gratefully acknowledges the valuable assistance of Julia H. Gray, of the Mathematics Research Center, University of Wisconsin, Madison, who painstakingly programmed these methods and obtained the computational results.

12.
A quasi-Newton extension of the Goldstein-Levitin-Polyak (GLP) projected gradient algorithm for constrained optimization is considered. Essentially, this extension projects an unconstrained descent step onto the feasible region. The determination of the stepsize is divided into two stages. The first is a stepsize sequence, chosen from the range [1,2] and converging to unity, which determines the size of the unconstrained step. The second is a stepsize chosen from the range [0,1] according to a stepsize strategy, which determines the length of the projected step. Two such strategies are considered: the first bounds the objective function decrease by a conventional linear functional, whereas the second uses a quadratic functional as a bound. The introduction of the unconstrained step provides the option of taking steps that are larger than unity. It is shown that unit steplengths, and subsequently superlinear convergence rates, are attained if the projection of the quasi-Newton Hessian approximation approaches the projection of the Hessian at the solution. Thus, the requirement in the GLP algorithm for a positive definite Hessian at the solution is relaxed. This allows the use of strictly positive definite Hessian approximations, thereby simplifying the quadratic subproblem involved, even if the Hessian at the solution is not strictly positive definite. This research was funded by a Science and Engineering Research Council Advanced Fellowship. The author is also grateful to an anonymous referee for numerous constructive criticisms and comments.

13.
A New Class of Nonmonotone Memory Gradient Methods and Their Global Convergence   (Cited by: 1; self-citations: 0, citations by others: 1)
Building on the nonmonotone Armijo line search, a new nonmonotone line search is proposed, and a class of memory gradient methods using this line search is studied; their global convergence is proved under rather weak conditions. Compared with the nonmonotone Armijo line search, the new nonmonotone line search can generate larger stepsizes at each iteration, so that the objective function value decreases sufficiently and the computational cost of the algorithm is reduced.

14.
In this paper, we propose a new affine-scaling trust-region algorithm combined with a nonmonotonic interior backtracking line search technique for solving nonlinear equality systems subject to bounds on the variables. The trust-region subproblem minimizes the squared Euclidean norm of a linear model plus an augmented quadratic affine-scaling term, subject only to an ellipsoidal constraint. By using both the trust-region strategy and the interior backtracking line search technique, each iterate switches to a backtracking step generated by the general trust-region subproblem and maintains strict interior-point feasibility through the backtracking line search. The global convergence and fast local convergence rate of the proposed algorithm are established under reasonable conditions. The nonmonotonic criterion is expected to speed up convergence in some ill-conditioned cases. The results of numerical experiments are reported to show the effectiveness of the proposed algorithm.

15.
欧宜贵  侯定丕 《数学季刊》2003,18(2):140-145
In this paper, a new trust-region algorithm for unconstrained LC^1 optimization problems is given. Compared with existing trust-region methods, this algorithm has a distinctive feature: it obtains a stepsize at each iteration not by solving a quadratic subproblem with a trust-region bound, but by solving a system of linear equations. This reduces the computational complexity and improves the computational efficiency. It is proven that this algorithm is globally convergent and locally superlinearly convergent under some conditions.

16.
An affine-scaling trust-region method combined with a nonmonotone technique is proposed for optimization problems with nonlinear equality and bound constraints. Combining the trust-region method with an interior backtracking line search technique, each iteration switches to a backtracking step generated by the general trust-region subproblem and satisfies the strict interior-point feasibility condition. Under reasonable assumptions, the global convergence and local superlinear convergence rate of the algorithm are proved. Finally, numerical results show that the proposed algorithm is effective.

17.
The conjugate gradient method is an important method for solving unconstrained optimization problems. This paper proposes a new family of conjugate gradient methods and proves their global convergence under a generalized Wolfe inexact line search. Finally, numerical experiments are carried out, and the results verify the effectiveness of the algorithm.
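For reference, the standard Wolfe conditions under which such convergence results are typically proved can be checked as follows. The constants 0 < c1 < c2 < 1 are the usual assumed defaults, and this generic test is not specific to the family proposed in the paper (whose generalized conditions differ).

```python
def satisfies_wolfe(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """Check the standard Wolfe conditions for stepsize t along direction d:
    sufficient decrease plus a curvature condition. Generic illustration;
    expects numpy arrays so that @ gives inner products."""
    g_d = grad(x) @ d                                       # directional derivative at x
    sufficient_decrease = f(x + t * d) <= f(x) + c1 * t * g_d
    curvature = grad(x + t * d) @ d >= c2 * g_d
    return sufficient_decrease and curvature
```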

18.
孙清滢 《数学季刊》2003,18(2):154-162
Conjugate gradient optimization algorithms depend on their search directions, which vary with different choices of the parameters in those directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method is proved under two kinds of common line searches. First, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with the Curry-Altman stepsize rule and a bounded level set. Second, by a comparison technique, some general convergence properties of the new method under other kinds of stepsize rules are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.

19.
Conjugate gradient methods are efficient methods for minimizing differentiable objective functions in large-dimension spaces. However, convergent line search strategies are usually not easy to choose, nor to implement. Sun and colleagues (Ann. Oper. Res. 103:161–173, 2001; J. Comput. Appl. Math. 146:37–45, 2002) introduced a simple stepsize formula. However, the associated convergence domain happens to be over-restrictive, since it precludes the optimal stepsize in the convex quadratic case. Here, we identify this stepsize formula with one iteration of the Weiszfeld algorithm in the scalar case. More generally, we propose to make use of a finite number of iterates of such an algorithm to compute the stepsize. In this framework, we establish a new convergence domain that incorporates the optimal stepsize in the convex quadratic case. The authors thank the associate editor and the reviewer for helpful comments and suggestions. C. Labat is now in a postdoctoral position at Johns Hopkins University, Baltimore, MD, United States.
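For context, the classical Weiszfeld fixed-point iteration (for the geometric median) is sketched below, since the stepsize formula above is identified with one such iteration in the scalar case. This is the textbook algorithm, shown only to illustrate the kind of iteration involved; it is not the paper's stepsize computation.

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-12):
    """Classical Weiszfeld fixed-point iteration for the geometric median of
    a set of points: x <- sum(p_i / ||x - p_i||) / sum(1 / ||x - p_i||).
    Textbook illustration of the iteration type referred to above."""
    x = points.mean(axis=0)                  # reasonable starting point
    for _ in range(iters):
        dist = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(dist, eps)      # guard against division by zero
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x
```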

20.
In this paper, we propose a new Dantzig–Wolfe decomposition for degenerate linear programs, with the nondegenerate constraints in the master problem and the degenerate ones in the subproblem. We propose three algorithms. The first, in which some sets of variables of the original problem are added to the master problem, corresponds to the Improved Primal Simplex algorithm (IPS) presented recently by Elhallaoui et al. [7]. In the second, some extreme points of the subproblem are added as columns to the master problem. The third algorithm is a mixed implementation that adds some original variables and some extreme points of the subproblem to the master problem. Experimental results on some degenerate instances show that the proposed algorithms yield computational times that are reduced by an average factor ranging from 3.32 to 13.16 compared to the primal simplex of CPLEX.
