
A Rapidly Convergence Algorithm for Linear Search and its Application
Cite this article: Jianliang Li, Hua Zhu, Xianzhong Zhou, Wenjing Song. A Rapidly Convergence Algorithm for Linear Search and its Application[J]. Numerical Mathematics: A Journal of Chinese Universities (English Series), 2006, 15(4): 299-305
Authors: Jianliang Li, Hua Zhu, Xianzhong Zhou, Wenjing Song
Affiliations: Jianliang Li, Hua Zhu, Xianzhong Zhou and Wenjing Song: Department of Applied Mathematics, Nanjing University of Science and Technology, Nanjing 210094, China; School of Management Science and Engineering, Nanjing University, Nanjing 210093, China.
Abstract: The essence of the line search is a one-dimensional nonlinear minimization problem, an important component of multidimensional nonlinear optimization that accounts for most of the operation count in solving an optimization problem. To improve efficiency, we start from quadratic interpolation, combine it with the quadratic convergence rate of Newton's method, and adopt the idea of Anderson-Bjorck extrapolation, obtaining a rapidly convergent algorithm together with the corresponding convergence results. Finally, we performed numerical experiments on some well-known optimization test functions and an application test on ANN learning examples. The experimental results demonstrate the validity of the algorithm.
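The paper's algorithm combines quadratic interpolation with Newton-type acceleration and Anderson-Bjorck extrapolation; the full scheme is not reproduced in this abstract. As a point of reference, the quadratic-interpolation core it builds on can be sketched as plain successive parabolic interpolation for a one-dimensional minimization (function and variable names here are hypothetical, not taken from the paper):

```python
def quad_interp_line_search(phi, a, b, c, tol=1e-8, max_iter=100):
    """Successive quadratic interpolation for min over alpha of phi(alpha).
    Assumes a < b < c bracket a minimizer: phi(b) < phi(a), phi(b) < phi(c)."""
    fa, fb, fc = phi(a), phi(b), phi(c)
    for _ in range(max_iter):
        # Vertex of the parabola through (a, fa), (b, fb), (c, fc)
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:  # three collinear points: no parabola vertex
            break
        x = b - 0.5 * num / den
        fx = phi(x)
        if abs(x - b) < tol:
            break
        # Replace one endpoint so the three points still bracket the minimizer
        if x < b:
            if fx < fb:
                c, fc = b, fb
                b, fb = x, fx
            else:
                a, fa = x, fx
        else:
            if fx < fb:
                a, fa = b, fb
                b, fb = x, fx
            else:
                c, fc = x, fx
    return b

# Example: phi(alpha) = (alpha - 2)^2 + 1, minimizer at alpha = 2
alpha = quad_interp_line_search(lambda t: (t - 2) ** 2 + 1, 0.0, 1.0, 5.0)
```

For a quadratic objective the first parabola fit is already exact, which illustrates why the paper takes this interpolation as the starting point before adding acceleration.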

Keywords: line search; nonlinear optimization; accelerated convergence; learning algorithm
Received: 2004-05-09
Revised: 2005-03-04

This article is indexed in the CNKI, VIP, and Wanfang Data databases.
