Study on a supermemory gradient method for the minimization of functions
Authors: E. E. Cragg, A. V. Levy
Institution: Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas
Abstract: A new accelerated gradient method for finding the minimum of a function f(x) whose variables are unconstrained is investigated. The new algorithm can be stated as follows: $$\tilde x = x + \delta x, \qquad \delta x = -\alpha g(x) + \sum\limits_{i = 1}^{k} \beta_i \,\delta x_i ,$$ where x is an n-vector, g(x) is the gradient of the function f(x), δx is the change in the position vector for the iteration under consideration, and δx_i is the change in the position vector for the ith previous iteration. The quantities α and β_i are scalars chosen at each step so as to yield the greatest decrease in the function; the scalar k denotes the number of past iterations remembered. For k = 1, the algorithm reduces to the memory gradient method of Ref. 2; it contains at most two undetermined multipliers to be optimized by a two-dimensional search. For k = n − 1, the algorithm contains at most n undetermined multipliers to be optimized by an n-dimensional search. Two nonquadratic test problems are considered. For both problems, the memory gradient method and the supermemory gradient method are compared with the Fletcher-Reeves method and the Fletcher-Powell-Davidon method. A comparison with quasilinearization is also presented.
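The update rule in the abstract lends itself to a brief illustration. Below is a minimal sketch, not the authors' implementation: it assumes a user-supplied objective f and gradient grad_f, remembers the k most recent steps δx_i, and, for illustration only, solves the inner search over (α, β_1, …, β_k) with SciPy's Nelder-Mead routine rather than the specific multidimensional search used in the paper.

```python
# Sketch of a supermemory gradient iteration (hypothetical helper names).
# Assumption: the inner minimization over (alpha, beta_1..beta_k) is delegated
# to scipy.optimize.minimize (Nelder-Mead) as a stand-in for the paper's search.
import numpy as np
from scipy.optimize import minimize as inner_search

def supermemory_gradient(f, grad_f, x0, k=2, iters=50):
    x = np.asarray(x0, dtype=float)
    memory = []                           # delta-x vectors of the k most recent iterations
    for _ in range(iters):
        g = grad_f(x)
        dirs = [-g] + memory              # steepest-descent direction plus remembered steps

        def merit(coeffs):
            # Objective value at the trial point x + alpha*(-g) + sum_i beta_i * dx_i
            step = sum(c * d for c, d in zip(coeffs, dirs))
            return f(x + step)

        res = inner_search(merit, x0=np.zeros(len(dirs)), method="Nelder-Mead")
        delta_x = sum(c * d for c, d in zip(res.x, dirs))
        x = x + delta_x
        memory = ([delta_x] + memory)[:k]  # keep only the k most recent steps
    return x

# Usage on a simple quadratic whose minimum is at the origin.
f = lambda x: x @ x
grad_f = lambda x: 2.0 * x
print(supermemory_gradient(f, grad_f, np.array([3.0, -4.0]), k=2))
```

With k = 1 the inner search is two-dimensional (over α and a single β), matching the memory gradient case described in the abstract; larger k trades a more expensive inner search for potentially faster outer convergence.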
This document is indexed in SpringerLink and other databases.