Learning Rate Refining for Gradient Descent Method of RBF Neural Networks

Citation: Lin Jiayu, Liu Ying. Learning Rate Refining for Gradient Descent Method of RBF Neural Networks [J]. Signal Processing, 2002, 18(1): 43-48.
Authors: Lin Jiayu, Liu Ying
Affiliation: College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China
Funding: Open Foundation of the National Key Laboratory of Broadband Optical Fiber Transmission and Communication Systems Technology
Revised: June 19, 2001

Abstract: Gradient descent (GD) is an efficient method for training radial basis function (RBF) neural networks. As with other descent-based algorithms, GD training of RBF networks raises the problem of how to choose the learning rate (step size). This paper presents an algorithm that refines the learning rate, based on a second-order Taylor expansion of the error energy function with respect to the learning rate, taken at a value selected by an "award-punish" strategy; a detailed derivation of the algorithm is given. Simulation studies show that the algorithm accelerates the convergence of the GD method and improves its performance. Furthermore, the performance of nonlinear modeling of speech signals by an RBF network trained with GD is compared with and without the refinement: in the setting considered in this paper (500 iterations per processed frame), modeling with the refined learning rate outperforms modeling without it by more than 2 dB. The idea behind the method can also be applied to learning-rate optimization in other descent-based algorithms.
Keywords: gradient descent method; learning rate refining; RBF neural networks
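
As a rough illustration of the idea described in the abstract, the Python sketch below models the error E as a quadratic in the step size eta along the descent direction and jumps to the minimizer eta* = -E'(0)/E''(0). It is only a sketch of the general technique, not the paper's algorithm: the paper derives the Taylor expansion for the RBF error energy function and picks the expansion point with an "award-punish" strategy, whereas this sketch estimates the derivatives by finite-difference probes. All function names and parameters here are illustrative assumptions.

import numpy as np

def refine_learning_rate(loss_fn, params, grad, h=1e-4, eta_max=1.0):
    # Probe the error along the descent direction d = -grad and fit the
    # quadratic model E(eta) ~ E(0) + E'(0)*eta + 0.5*E''(0)*eta^2.
    d = -grad
    e0 = loss_fn(params)                # E(0)
    e1 = loss_fn(params + h * d)        # E(h)
    e2 = loss_fn(params + 2.0 * h * d)  # E(2h)

    # Finite-difference estimates of the derivatives at eta = 0
    # (both are exact when E is quadratic in eta).
    de = (-3.0 * e0 + 4.0 * e1 - e2) / (2.0 * h)  # E'(0)
    d2e = (e0 - 2.0 * e1 + e2) / (h * h)          # E''(0)

    if d2e <= 0.0:
        # The quadratic model has no minimum; fall back to the probe step.
        return h
    eta = -de / d2e                     # minimizer of the quadratic model
    return float(np.clip(eta, h, eta_max))

# Toy usage: one refined gradient-descent step on a quadratic error surface.
A = np.diag([1.0, 10.0])

def loss(w):
    return 0.5 * w @ A @ w

w = np.array([1.0, 1.0])
g = A @ w                               # exact gradient of the toy error
eta = refine_learning_rate(loss, w, g)
w = w - eta * g
print("refined eta:", eta, "new error:", loss(w))

On a convex error surface this recovers the exact line-search minimizer along the gradient direction; the fallback for a non-positive second derivative keeps the step bounded where the quadratic model breaks down.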
Indexed by: CNKI, Wanfang Data, and other databases.