Similar Documents
20 similar documents found.
1.
A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein, whose method is proved to be superlinearly convergent for convex quadratics in two-dimensional space and performs quite well for high-dimensional problems. The BB method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae which enable fast convergence and possess the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm with this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within 3 iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large-scale problems and better for small-scale problems.
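The abstract does not state the new stepsize formula itself; as background, here is a minimal Python sketch of steepest descent driven by the classical (long) Barzilai-Borwein stepsize on which the paper builds. The function names and the test quadratic are illustrative only.

```python
import numpy as np

def steepest_descent_bb(grad, x0, max_iter=100, tol=1e-8, alpha0=1e-3):
    """Steepest descent with the (long) Barzilai-Borwein stepsize."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0                         # initial stepsize before BB information exists
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g              # steepest descent step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g        # differences used by the BB formula
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else alpha0   # BB1 stepsize s's / s'y
        x, g = x_new, g_new
    return x

# Example: quadratic f(x) = 0.5*x'Ax - b'x with gradient Ax - b
A = np.diag([1.0, 10.0]); b = np.array([1.0, 1.0])
x_min = steepest_descent_bb(lambda x: A @ x - b, np.zeros(2))
```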

2.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.

3.
In this work, a new stabilization scheme for the Gauss-Newton method is defined, where the minimum norm solution of the linear least-squares problem is normally taken as search direction and the standard Gauss-Newton equation is suitably modified only at a subsequence of the iterates. Moreover, the stepsize is computed by means of a nonmonotone line search technique. The global convergence of the proposed algorithm model is proved under standard assumptions, and the superlinear rate of convergence is ensured for the zero-residual case. A specific implementation algorithm is described, where the use of the pure Gauss-Newton iteration is conditioned on the progress made in the minimization process by controlling the stepsize. The results of computational experiments performed on a set of standard test problems are reported.
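To illustrate the search direction described above, the following sketch computes one damped Gauss-Newton step using the minimum-norm least-squares solution returned by numpy.linalg.lstsq; the simple monotone backtracking here merely stands in for the paper's nonmonotone line search and is an assumption of the sketch.

```python
import numpy as np

def gauss_newton_step(r, J, x, shrink=0.5, max_backtracks=20):
    """One damped Gauss-Newton step: minimum-norm least-squares direction
    plus a simple (monotone) backtracking safeguard on the stepsize."""
    rx, Jx = r(x), J(x)
    # lstsq returns the minimum-norm solution of  min ||Jx d + rx||
    d = np.linalg.lstsq(Jx, -rx, rcond=None)[0]
    f0, t = 0.5 * rx @ rx, 1.0
    for _ in range(max_backtracks):
        if 0.5 * np.sum(r(x + t * d) ** 2) < f0:
            return x + t * d
        t *= shrink
    return x          # no improvement found; keep the current iterate
```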

4.
Powell has shown that the cyclic coordinate method with exact searches may not converge to a stationary point. In this note we consider a more general class of algorithms for unconstrained minimization, and establish their convergence under the assumption that the objective function has a unique minimum along any line.
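A minimal sketch of the plain cyclic coordinate method with exact one-dimensional searches (the scheme that Powell's counterexample concerns) might look as follows; the helper names and the test function are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_coordinate_descent(f, x0, sweeps=50):
    """Cyclic coordinate method: exact line search along each coordinate axis."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            e = np.zeros(n); e[i] = 1.0
            # exact (numerical) minimization along the i-th coordinate direction
            t = minimize_scalar(lambda t: f(x + t * e)).x
            x = x + t * e
    return x

x_star = cyclic_coordinate_descent(lambda x: (x[0] - 1)**2 + 2*(x[1] + 3)**2, [0.0, 0.0])
```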

5.
For solving an equality constrained nonlinear least squares problem, a globalization scheme for the generalized Gauss-Newton method via damping is proposed. The stepsize strategy is based on a special exact penalty function. Under natural conditions the global convergence of the algorithm is proved. Moreover, if the algorithm converges to a solution having a sufficiently small residual, the algorithm is shown to change automatically into the undamped generalized Gauss-Newton method with a fast linear rate of convergence. The behaviour of the method is demonstrated on some examples taken from the literature.

6.
A new subspace minimization conjugate gradient algorithm with a nonmonotone Wolfe line search is proposed and analyzed. In the scheme, we propose two choices of the search direction obtained by minimizing a quadratic approximation of the objective function in special subspaces, and state criteria for how to choose between them. Under given conditions, we obtain the significant conclusion that each choice of the direction satisfies the sufficient descent property. Based on how close the objective function is to a quadratic function, a new strategy for choosing the initial stepsize of the line search is presented. Using the nonmonotone Wolfe line search, we prove the global convergence of the proposed method for general nonlinear functions under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.

7.
In this paper, acceptability criteria for the stepsize and global convergence conditions are established for unconstrained minimization methods employing only function values. On the basis of these results, the convergence of an implementable line search algorithm is proved and some global stabilization schemes are described. The authors would like to thank the anonymous referees for their useful suggestions.

8.
An algorithm is presented that minimizes a nonlinear function in many variables under equality constraints by generating a monotonically improving sequence of feasible points along curvilinear search paths obeying an initial-value system of differential equations. The derivation of the differential equations is based on the idea of a steepest descent curve for the objective function on the feasible region. For small stepsizes our method behaves like the generalized reduced gradient algorithm, whereas for a large enough stepsize the constrained equivalent of Newton's method for unconstrained minimization is obtained.
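In the unconstrained case, the steepest descent curve mentioned above is the solution of the initial-value problem x'(t) = -∇f(x(t)). A minimal sketch that follows this trajectory numerically, ignoring the equality constraints the paper actually handles, is shown below; the function names and the test problem are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def steepest_descent_curve(grad, x0, t_end=10.0):
    """Follow the steepest descent trajectory x'(t) = -grad f(x(t))."""
    sol = solve_ivp(lambda t, x: -grad(x), (0.0, t_end), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]      # point reached at time t_end

# Example: f(x) = 0.5*(x1^2 + 10*x2^2), gradient (x1, 10*x2)
x_end = steepest_descent_curve(lambda x: np.array([x[0], 10.0 * x[1]]),
                               np.array([2.0, 1.0]))
```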

9.
This paper presents a readily implementable algorithm for solving constrained minimization problems involving (possibly nonsmooth) convex functions. The constraints are handled as in the successive quadratic approximations methods for smooth problems. An exact penalty function is employed for stepsize selection. A scheme for automatic limitation of penalty growth is given. Global convergence of the algorithm is established, as well as finite termination for piecewise linear problems. Numerical experience is reported. Sponsored by Program CPBP 02.15.

10.
We propose a new inexact line search rule and analyze the global convergence and convergence rate of related descent methods. The new line search rule is similar to the Armijo line-search rule and contains it as a special case. We can choose a larger stepsize in each line-search procedure and maintain the global convergence of related line-search methods. This idea allows us to design new line-search methods in a wider sense. In some special cases, the new descent method reduces to the Barzilai and Borwein method. Numerical results show that the new line-search methods are efficient for solving unconstrained optimization problems. The work was supported by NSF of China Grant 10171054, Postdoctoral Fund of China, and K. C. Wong Postdoctoral Fund of CAS Grant 6765700. The authors thank the anonymous referees for constructive comments and suggestions that greatly improved the paper.
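For reference, the classical Armijo backtracking rule that the new rule contains as a special case can be sketched as follows; the parameter values are conventional defaults, not the paper's, and the function names are illustrative.

```python
import numpy as np

def armijo_stepsize(f, grad, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Classical Armijo backtracking: accept the first alpha with
    f(x + alpha*d) <= f(x) + sigma*alpha*grad(x)'d (d must be a descent direction)."""
    fx, slope = f(x), grad(x) @ d       # slope is negative for a descent direction
    alpha = alpha0
    while f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= beta                   # shrink until sufficient decrease holds
    return alpha
```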

11.
A rank-one algorithm is presented for unconstrained function minimization. The algorithm is a modified version of Davidon's variance algorithm and incorporates a limited line search. It is shown that the algorithm is a descent algorithm; for quadratic forms, it exhibits finite convergence, in certain cases. Numerical studies indicate that it is considerably superior to both the Davidon-Fletcher-Powell algorithm and the conjugate-gradient algorithm.
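The abstract does not reproduce the modified update itself; as background, the classical symmetric rank-one update of a Hessian approximation, with the standard skipping safeguard, is sketched below. It is not claimed to be the paper's exact variant of Davidon's variance algorithm.

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Classical symmetric rank-one update of a Hessian approximation B,
    skipped when the denominator is too small (standard safeguard)."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < eps * np.linalg.norm(v) * np.linalg.norm(s):
        return B                          # skip the update to avoid blow-up
    return B + np.outer(v, v) / denom
```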

12.
Xu Haiwen, 《计算数学》 (Mathematica Numerica Sinica), 2012, 34(1): 93-102
The proximal point algorithm (PPA) is a classical method for solving convex optimization problems, but it usually requires solving an implicit subproblem exactly. The approximate proximal point algorithm (APPA) therefore solves the PPA subproblem inexactly under a suitable approximation rule, which reduces the difficulty of the subproblem. In this paper, two search directions are generated by using the historical information of the approximation rule together with a random-number expansion of the prediction-correction step, and a class of hybrid descent algorithms for convex optimization is obtained by combining the two directions with a random weight. Convergence of the hybrid descent algorithm is proved under the approximation rule. A series of numerical experiments demonstrates the effectiveness and efficiency of the hybrid descent algorithm.
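A minimal sketch of the exact proximal point iteration that the APPA and the proposed hybrid descent method relax is given below; each implicit subproblem is simply handed to a generic numerical minimizer here, which is an assumption of the sketch, as are the function names and the test problem.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, lam=1.0, iters=20):
    """Exact proximal point algorithm:
    x_{k+1} = argmin_z f(z) + (1/(2*lam)) * ||z - x_k||^2.
    Each subproblem is solved numerically; APPA-type methods would solve it
    only approximately under an inexactness rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        sub = minimize(lambda z: f(z) + np.sum((z - x) ** 2) / (2 * lam),
                       x, method="Nelder-Mead")
        x = sub.x
    return x

# Example: nonsmooth convex f(z) = ||z||_1 + 0.5*||z||^2
x_star = proximal_point(lambda z: np.abs(z).sum() + 0.5 * np.sum(z ** 2),
                        np.array([3.0, -2.0]))
```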

13.
In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters in a way that the first stationary point along a search direction can be eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be reobtained as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the same good features of the Polak-Ribière method, while avoiding pathological situations. This research was supported by Agenzia Spaziale Italiana, Rome, Italy.
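For context, a bare-bones Polak-Ribière iteration combined with a generic stepsize routine can be sketched as follows; the acceptance rules proposed in the paper would plug in as the `stepsize` callback, which is a hypothetical interface of this sketch.

```python
import numpy as np

def prp_cg(f, grad, x0, stepsize, max_iter=200, tol=1e-8):
    """Polak-Ribiere CG skeleton; 'stepsize(f, grad, x, d)' is any line search
    routine returning an acceptable alpha for the pair (x, d)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x); d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = stepsize(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)   # Polak-Ribiere coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```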

14.
We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search direction rays onto the feasible set and the search directions are chosen such that a subset approximately generates the cone of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant and the regularity assumptions hold automatically. Numerical experience with the methods in the linearly constrained case is reported. Received: November 12, 1999 / Accepted: April 6, 2001 / Published online: October 26, 2001.

15.
Recently, similar to Hager and Zhang (SIAM J Optim 16:170–192, 2005), Yu (Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems. Doctoral thesis, Sun Yat-Sen University, 2007) and Yuan (Optim Lett 3:11–21, 2009) proposed modified PRP conjugate gradient methods which generate sufficient descent directions without any line searches. In order to obtain the global convergence of their algorithms, they need the assumption that the stepsize is bounded away from zero. In this paper, we make a slight modification to these methods so that the modified methods retain the sufficient descent property. Without requiring a positive lower bound on the stepsize, we prove that the proposed methods are globally convergent. Some numerical results are also reported.

16.
Descent methods with linesearch in the presence of perturbations
We consider the class of descent algorithms for unconstrained optimization with an Armijo-type stepsize rule in the case when the gradient of the objective function is computed inexactly. An important novel feature in our theoretical analysis is that perturbations associated with the gradient are not assumed to be relatively small or to tend to zero in the limit (as a practical matter, we expect them to be reasonably small, so that a meaningful approximate solution can be obtained). This feature makes our analysis applicable to various difficult problems encountered in practice. We propose a modified Armijo-type rule for computing the stepsize which guarantees that the algorithm obtains a reasonable approximate solution. Furthermore, if perturbations are small relative to the size of the gradient, then our algorithm retains all the standard convergence properties of descent methods.

17.
An efficient class of descent methods for unconstrained optimization problems is the line search methods, in which a step size must be chosen at each iteration after a descent direction has been determined. There are many ways to choose the step size, such as the exact line search and the Armijo, Goldstein, and Wolfe line searches. In this paper we propose a new inexact line search for a general descent method and establish some global convergence properties. The new line search has many advantages compared with other similar inexact line searches. Moreover, we analyze the global convergence and local convergence rate of some special descent methods with the new line search. Preliminary numerical results show that the new line search is effective and efficient in practical computation.
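As background for the standard rules mentioned above, the following sketch checks the (weak) Wolfe conditions for a trial stepsize, the Armijo condition being the first of the two; the constants c1 and c2 are conventional choices, not the paper's, and the function names are illustrative.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a trial stepsize alpha
    along a descent direction d."""
    fx, gd = f(x), grad(x) @ d
    armijo    = f(x + alpha * d) <= fx + c1 * alpha * gd     # sufficient decrease
    curvature = grad(x + alpha * d) @ d >= c2 * gd           # curvature condition
    return armijo and curvature
```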

18.
This paper presents a modification of the BFGS method for unconstrained minimization that avoids the computation of derivatives. The gradients are approximated by differences of function values, and these approximations are calculated in such a way that a complete convergence proof can be given. The presented algorithm is implementable; no exact line search is required. It is shown that, if the objective function is convex and some usually required conditions hold, the algorithm converges to a solution. If the Hessian matrix of the objective function is positive definite and satisfies a Lipschitz condition in a neighbourhood of the solution, then the rate of convergence is superlinear.
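Two ingredients of such a derivative-free BFGS variant, a forward-difference gradient approximation and the standard inverse-BFGS update, are sketched below; the careful control of the difference intervals that makes the convergence proof possible is not reproduced here, and the helper names are illustrative.

```python
import numpy as np

def fd_gradient(f, x, h=1e-7):
    """Forward-difference approximation to the gradient from function values."""
    fx = f(x)
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float); e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H,
    assuming the curvature condition y's > 0 holds."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```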

19.
In this paper, we consider linearly constrained multiobjective minimization and propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a suitable descent direction for all the objective functions, and then uses a bisection algorithm to find an optimal stepsize along this direction. We prove, under natural assumptions, that the proposed algorithm is well defined and converges globally to Pareto critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative results of numerical experiments are reported.

20.
In this paper we view the Barzilai and Borwein (BB) method from a new angle and present a new adaptive Barzilai and Borwein (NABB) method with a nonmonotone line search for general unconstrained optimization. In the proposed method, the scalar approximation to the Hessian matrix is updated by the Broyden class formula to generate an adaptive stepsize. Remarkably, the new stepsize is chosen adaptively in the interval that contains the two well-known BB stepsizes. Moreover, for negative curvature directions, a strategy for the choice of the stepsize is designed to accelerate the convergence rate of the NABB method. Furthermore, we apply the NABB method without any line search to strictly convex quadratic minimization. The numerical experiments show the NABB method is very promising.
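For reference, the two classical BB stepsizes that bound the interval mentioned above are sketched below, together with a simple convex combination standing in for the Broyden-class rule of the NABB method; the weighting parameter tau and the function names are illustrative assumptions.

```python
import numpy as np

def bb_stepsizes(s, y):
    """The two classical Barzilai-Borwein stepsizes (assuming s'y > 0)."""
    bb1 = (s @ s) / (s @ y)    # 'long'  BB stepsize
    bb2 = (s @ y) / (y @ y)    # 'short' BB stepsize
    return bb1, bb2

def adaptive_stepsize(s, y, tau=0.5):
    """A stepsize chosen inside the interval spanned by the two BB values;
    a plain convex combination, not the NABB Broyden-class update."""
    bb1, bb2 = bb_stepsizes(s, y)
    return tau * bb2 + (1 - tau) * bb1
```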
