Similar Literature
20 similar articles found.
1.
Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization
For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in practical computation. In this paper we interpret the stepsize choice of the two-point stepsize gradient method from the viewpoint of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under mild assumptions on the objective function. Numerical results are reported which suggest that improvements have been achieved.
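For orientation, here is a minimal sketch of the classical two-point (Barzilai–Borwein) stepsize that such modifications start from — not the authors' modified methods themselves. The quadratic test problem and the BB1 formula s^T s / s^T y are illustrative assumptions:

```python
import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, n_iter=100, tol=1e-8):
    """Plain two-point stepsize (Barzilai-Borwein) gradient method.

    After a fixed first step, the BB1 stepsize s^T s / s^T y is used,
    where s = x_k - x_{k-1} and y = g_k - g_{k-1}.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev          # initial fixed step
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > 1e-16 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Demo on a convex quadratic f(x) = 0.5 * x^T A x (a hypothetical test problem)
A = np.diag([1.0, 10.0, 100.0])
print(bb_gradient(lambda z: A @ z, np.ones(3)))
```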

2.
In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone line search rules, and we prove the global convergence of these schemes. We then perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large-scale unconstrained optimization problems.
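A minimal sketch of the kind of nonmonotone Armijo test such globalizations build on, assuming a Grippo-style "max of recent values" reference; the paper's watchdog combination and exact acceptance rules are not reproduced:

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, g_dot_d, x, d, alpha, hist, c1=1e-4, shrink=0.5, max_back=30):
    """Backtrack until f(x + alpha*d) <= max(recent f-values) + c1*alpha*g^T d.

    Comparing against the max of the last few objective values (hist),
    instead of f(x) alone, is the nonmonotone relaxation.
    """
    f_ref = max(hist)
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + c1 * alpha * g_dot_d:
            return alpha
        alpha *= shrink
    return alpha

# Usage sketch on a toy quadratic (an assumed setup, not the paper's tests):
A = np.diag([1.0, 50.0])
f = lambda z: 0.5 * z @ A @ z
grad = lambda z: A @ z
x = np.array([1.0, 1.0])
hist = deque([f(x)], maxlen=10)
for _ in range(50):
    g = grad(x)
    d = -g                                   # steepest descent direction
    alpha = nonmonotone_armijo(f, g @ d, x, d, 1.0, hist)
    x = x + alpha * d
    hist.append(f(x))
print(x, f(x))
```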

3.
黄海 《经济数学》2011,28(2):25-28
Building on the modified PRP conjugate gradient method, this paper proposes a sufficient-descent conjugate gradient algorithm for unconstrained optimization, proves its global convergence under the Wolfe line search, and shows through numerical experiments that the algorithm yields good numerical results.

4.
A class of hybrid conjugate gradient algorithms for unconstrained optimization is proposed. The new algorithm combines the advantages of the DY and HS algorithms, and its global convergence is proved under rather weak conditions using a nonmonotone line search technique. Numerical experiments show that the new algorithm is computationally efficient.
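One standard way to hybridize the HS and DY parameters is β_k = max(0, min(β_HS, β_DY)); whether this matches the paper's exact rule is an assumption. A sketch:

```python
import numpy as np

def hybrid_dy_hs_beta(g_new, g_old, d_old):
    """Hybrid CG parameter beta = max(0, min(beta_HS, beta_DY)).

    y = g_new - g_old; both formulas share the denominator d_old^T y.
    The next search direction would be d_new = -g_new + beta * d_old.
    """
    y = g_new - g_old
    denom = d_old @ y
    if abs(denom) < 1e-16:
        return 0.0                       # degenerate case: restart with -g_new
    beta_hs = (g_new @ y) / denom        # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / denom    # Dai-Yuan
    return max(0.0, min(beta_hs, beta_dy))
```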

5.
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which include the well-known Polak–Ribière method and Hestenes–Stiefel method as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when it is applied to solve unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak–Ribière method and Hestenes–Stiefel method in this nonmonotone conjugate gradient class are competitive vis-à-vis their monotone counterparts.

6.
The gradient method for the symmetric positive definite linear system $Ax = b$ is as follows:

$$x_{k+1} = x_k + \alpha_k g_k, \qquad (1)$$

where $g_k = b - Ax_k$ is the residual of the system at $x_k$ and $\alpha_k$ is the stepsize. The stepsize $\alpha^* = 2/(\lambda_1 + \lambda_n)$ is optimal in the sense that it minimizes the modulus $\|I - \alpha A\|_2 = \max\{|1-\alpha\lambda_1|,\, |1-\alpha\lambda_n|\}$, where $\lambda_1$ and $\lambda_n$ are the minimal and maximal eigenvalues of $A$, respectively. Since $\lambda_1$ and $\lambda_n$ are unknown to users, the gradient method with the optimal stepsize is usually mentioned only in theory. In this paper, we propose a new stepsize formula which tends to the optimal stepsize as $k \to \infty$. At the same time, the minimal and maximal eigenvalues, $\lambda_1$ and $\lambda_n$, of $A$ and their corresponding eigenvectors can be obtained. This research was initiated while the first author was visiting The Hong Kong Polytechnic University. The first author was supported by Chinese NSF grants (No. 40233029 and 101071104) and an innovation fund of the Chinese Academy of Sciences; the second author was supported by a grant from the Research Committee of The Hong Kong Polytechnic University (A-PC36).
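For reference, a sketch of iteration (1) with the fixed optimal stepsize 2/(λ1 + λn), computed here directly from the spectrum. The paper's point is precisely that its adaptive formula approaches this value without knowing the eigenvalues; that formula is not reproduced here:

```python
import numpy as np

def gradient_optimal_step(A, b, x0, n_iter=200):
    """Gradient iteration x_{k+1} = x_k + alpha * g_k with g_k = b - A x_k,
    using the fixed optimal stepsize alpha = 2 / (lambda_min + lambda_max),
    which minimizes the spectral norm of I - alpha*A.
    """
    lam = np.linalg.eigvalsh(A)          # uses the spectrum directly (the
    alpha = 2.0 / (lam[0] + lam[-1])     # very thing the paper avoids)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x + alpha * (b - A @ x)
    return x

A = np.diag([1.0, 4.0, 9.0]); b = np.ones(3)
print(gradient_optimal_step(A, b, np.zeros(3)))   # approximately solves A x = b
```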

7.
It is well known that the norm of the gradient may be unreliable as a stopping test in unconstrained optimization and that it often exhibits oscillations in the course of the optimization. In this paper we present results describing the properties of the gradient norm for the steepest descent method applied to quadratic objective functions. We also make some general observations that apply to nonlinear problems, relating the gradient norm, the objective function value, and the path generated by the iterates.
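A quick experiment of the kind the paper analyzes: exact steepest descent on an ill-conditioned quadratic (the matrix and starting point are arbitrary illustrative choices). The objective decreases monotonically, yet the printed gradient norm rises at some iterations:

```python
import numpy as np

# Exact steepest descent on f(x) = 0.5 * x^T A x.  On a quadratic the exact
# stepsize is alpha_k = g_k^T g_k / (g_k^T A g_k).  f(x_k) decreases
# monotonically, but with this starting point the gradient norm increases
# at some iterations -- illustrating why ||g|| is an unreliable stopping test.
A = np.diag([1.0, 10.0, 100.0])
x = np.ones(3)
for k in range(8):
    g = A @ x
    alpha = (g @ g) / (g @ A @ g)
    x = x - alpha * g
    print(k, 0.5 * x @ A @ x, np.linalg.norm(A @ x))
```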

8.
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the scalar β_k, measured relative to the one in the new method, belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we are able to construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.

9.
This paper discusses the global convergence of a class of nonmonotone conjugate gradient methods (NM methods) for nonconvex objective functions. This class of methods includes the nonmonotone counterparts of the modified Polak–Ribière method and the modified Hestenes–Stiefel method as special cases.

10.
This paper proposes a class of three-term hybrid conjugate gradient algorithms for unconstrained optimization. The new algorithm combines the Hestenes–Stiefel and Dai–Yuan methods, and its convergence under the Wolfe line search is proved without requiring a descent condition. Numerical experiments also show the advantage of this hybrid conjugate gradient algorithm over the HS and PRP methods.

11.
In this paper, a new steplength formula is proposed for unconstrained optimization, which determines the stepsize in a single step and avoids the line search. Global convergence of five well-known conjugate gradient methods with this formula is analyzed, with the following results: (1) the DY method globally converges for a strongly convex LC^1 objective function; (2) the CD method, the FR method, the PRP method, and the LS method globally converge for a general, not necessarily convex, LC^1 objective function.

12.
This paper proposes a line search technique to satisfy a relaxed form of the strong Wolfe conditions in order to guarantee the descent condition at each iteration of the Polak-Ribière-Polyak conjugate gradient algorithm. It is proved that this line search algorithm preserves the usual convergence properties of any descent algorithm. In particular, it is shown that the Zoutendijk condition holds under mild assumptions. It is also proved that the resulting conjugate gradient algorithm is convergent under a strong convexity assumption. For the nonconvex case, a globally convergent modification is proposed. Numerical tests are presented. This paper is based on an earlier work presented at the International Symposium on Mathematical Programming in Lausanne in 1997. The author thanks J. C. Gilbert for his advice and M. Al-Baali for some recent discussions which motivated him to write this paper. Special thanks to G. Liu, J. Nocedal, and R. Waltz for the availability of the software CG+, and to one of the referees who pointed him to the paper of Grippo and Lucidi (Ref. 1).
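For concreteness, here is an acceptance test for the standard (unrelaxed) strong Wolfe conditions; the relaxed form actually used in the paper is not reproduced, and the constants c1 and c2 are conventional choices rather than the paper's:

```python
def strong_wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the standard strong Wolfe conditions for stepsize alpha:

        f(x + a*d) <= f(x) + c1*a*g^T d        (sufficient decrease)
        |grad(x + a*d)^T d| <= c2*|g^T d|      (strong curvature)

    with 0 < c1 < c2 < 1; c2 = 0.1 is a common choice for CG methods.
    Works on numpy arrays; f and grad are user-supplied callables.
    """
    g_dot_d = grad(x) @ d
    x_new = x + alpha * d
    decrease = f(x_new) <= f(x) + c1 * alpha * g_dot_d
    curvature = abs(grad(x_new) @ d) <= c2 * abs(g_dot_d)
    return decrease and curvature
```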

13.
The conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. This paper proposes a new conjugate gradient method and proves its global convergence under generalized Wolfe line search conditions. Finally, numerical experiments are carried out, and the results show that the algorithm has good convergence behavior and is effective.

14.
In this paper, an unconstrained optimization method using the nonmonotone second-order Goldstein line search is proposed. By using negative curvature information from the Hessian, the sequence generated is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of the new method. This work was supported by the National Natural Science Foundation of China (Grant No. 10231060) and the Specialized Research Fund of the Doctoral Program of Higher Education of China (Grant No. 20040319003).
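As a baseline for the paper's method, a sketch of a classical two-sided Goldstein stepsize search; the nonmonotone second-order variant with negative-curvature information is not reproduced, and the simple grow/shrink bracketing below is an illustrative simplification:

```python
def goldstein_stepsize(phi, phi0, dphi0, alpha=1.0, c=0.25,
                       grow=2.0, shrink=0.5, max_iter=50):
    """Search for a stepsize satisfying the classical two-sided Goldstein
    conditions (0 < c < 1/2, dphi0 = g^T d < 0):

        phi0 + (1 - c)*alpha*dphi0 <= phi(alpha) <= phi0 + c*alpha*dphi0

    where phi(a) = f(x + a*d).  The upper bound enforces sufficient
    decrease; the lower bound rules out overly short steps.
    """
    for _ in range(max_iter):
        val = phi(alpha)
        if val > phi0 + c * alpha * dphi0:            # too little decrease
            alpha *= shrink
        elif val < phi0 + (1.0 - c) * alpha * dphi0:  # step too short
            alpha *= grow
        else:
            return alpha
    return alpha
```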

15.
In this paper, an unconstrained optimization method using the nonmonotone second-order Goldstein line search is proposed. By using negative curvature information from the Hessian, the sequence generated is shown to converge to a stationary point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency of the new method.

16.
A Brief Overview of Gradient Methods
孙聪  张亚 《运筹学学报》2021,25(3):119-132
Gradient methods are a class of first-order methods for solving optimization problems. Because of their simple form and low per-iteration cost, they are widely used for large-scale problems. This paper systematically introduces the iteration scheme and theoretical framework of gradient methods for smooth unconstrained problems. The most important parameter in a gradient method is the stepsize, whose choice directly determines the method's convergence properties and convergence rate. The construction of gradient stepsizes and the corresponding convergence results are presented from four perspectives: line search frameworks, approximation techniques, stochastic techniques, and alternating and repeated stepsizes. Extensions such as gradient methods for nonsmooth and constrained problems, acceleration techniques, and stochastic gradient methods are also briefly introduced.

17.
Globally Convergent Algorithms for Unconstrained Optimization
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining Newton's direction and the steepest descent direction within each iteration. Global convergence is guaranteed from an arbitrary initial point. The search direction in each iteration is chosen to be as close to the Newton direction as possible and may be the Newton direction itself. Asymptotically the Newton step is taken in each iteration, so the local convergence is quadratic. Numerical experiments are reported. Combining a quasi-Newton direction with the steepest descent direction is also considered in the numerical experiments, and the differences between the proposed strategy and a few other strategies are discussed.
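A hedged sketch of one way to blend the Newton and steepest descent directions within an iteration; the descent test and the back-off rule below are illustrative assumptions, not the paper's exact strategy:

```python
import numpy as np

def combined_direction(g, H, eta=0.5, t_shrink=0.5):
    """Search direction d = t*d_newton + (1 - t)*d_sd, with t backed off
    from 1 until d passes a simple descent test g^T d <= -eta*||g||^2.
    With t = 1 accepted, the step is the pure Newton direction; as t -> 0
    the direction approaches steepest descent, so the loop terminates.
    """
    d_sd = -g
    try:
        d_n = np.linalg.solve(H, -g)      # Newton direction
    except np.linalg.LinAlgError:
        return d_sd                       # singular Hessian: fall back
    t = 1.0
    while t > 1e-8:
        d = t * d_n + (1.0 - t) * d_sd
        if g @ d <= -eta * (g @ g):       # descent test passed
            return d
        t *= t_shrink
    return d_sd
```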

18.
Convergence of Memory Gradient Methods under the Wolfe Line Search
This paper studies a memory gradient algorithm for unconstrained optimization and analyzes its global convergence and linear convergence rate under the Wolfe line search. Preliminary numerical results demonstrate the effectiveness of the algorithm.

19.
Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The first memory gradient methods were proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that the method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on standard test problems, provided the parameter included in the method is chosen well.
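A minimal sketch of the memory gradient direction d_k = -g_k + β_k d_{k-1}, with an illustrative cap on β_k to keep d_k a descent direction; the paper's actual parameter rule is not reproduced:

```python
def memory_gradient_direction(g, d_prev, beta=0.5):
    """Memory gradient direction d_k = -g_k + beta_k * d_{k-1}.

    If g^T d_prev > 0, beta is capped so that g^T d_k <= -0.5*||g||^2,
    guaranteeing descent; the cap is an illustrative choice only.
    Works on numpy arrays; pass d_prev=None on the first iteration.
    """
    if d_prev is None:
        return -g                          # first iteration: steepest descent
    gTd = g @ d_prev
    if gTd > 0:
        beta = min(beta, 0.5 * (g @ g) / gTd)
    return -g + beta * d_prev
```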

20.
A new conjugate gradient method is proposed in this paper. For any (inexact) line search, our scheme satisfies the sufficient descent property. The method is proved to be globally convergent if the restricted Wolfe-Powell line search is used. Preliminary numerical results show that it is efficient.
