A class of globally convergent conjugate gradient methods
Authors: Yuhong Dai (dyh@lsec.cc.ac.cn), Yaxiang Yuan (yyx@lsec.cc.ac.cn)
Affiliation:State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, Beijing 100080, China
Abstract: Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large-scale problems. However, unlike quasi-Newton methods, conjugate gradient methods have usually been analyzed individually. In this paper, we propose a class of conjugate gradient methods which can be regarded as a kind of convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools that concern a general method whose scalar β_k has the form φ_k/φ_{k-1}. Consequently, the class of conjugate gradient methods can be analyzed uniformly.
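To make the idea in the abstract concrete, the following is a minimal sketch (my own illustration, not the authors' exact algorithm) of a hybrid nonlinear conjugate gradient method in which the scalar β_k is a convex combination of the Fletcher-Reeves (FR) and Dai-Yuan (DY) formulas; a simple Armijo backtracking line search with a steepest-descent restart stands in for the Wolfe line search analyzed in the paper:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_hybrid(f, grad, x, lam=0.5, tol=1e-8, max_iter=500):
    """Minimize f from starting point x; lam in [0, 1] blends FR and DY betas."""
    g = grad(x)
    d = [-gi for gi in g]                          # initial steepest-descent direction
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 < tol:                 # stop when gradient is small
            break
        # Armijo backtracking line search (a stand-in for the Wolfe conditions).
        alpha, fx, slope = 1.0, f(x), dot(g, d)
        while alpha > 1e-12 and \
                f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta_fr = dot(g_new, g_new) / dot(g, g)    # Fletcher-Reeves
        denom = dot(d, y)
        beta_dy = dot(g_new, g_new) / denom if denom != 0 else 0.0  # Dai-Yuan
        beta = (1 - lam) * beta_fr + lam * beta_dy  # convex combination of the two
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        if dot(g_new, d) >= 0:                     # not a descent direction: restart
            d = [-gn for gn in g_new]
        x, g = x_new, g_new
    return x

# Usage: minimize the convex quadratic f(x, y) = x^2 + 10 y^2.
f = lambda v: v[0] ** 2 + 10 * v[1] ** 2
grad = lambda v: [2 * v[0], 20 * v[1]]
x_star = cg_hybrid(f, grad, [3.0, -2.0])           # converges toward (0, 0)
```

The global-convergence theory in the paper concerns the exact choice of β_k and the line-search conditions; the restart and Armijo rule above are practical simplifications for illustration only.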
Keywords: unconstrained optimization, conjugate gradient, line search, global convergence
This document is indexed in CNKI, Wanfang Data, SpringerLink, and other databases.