
A Fast Learning Algorithm for Feedforward Neural Networks Based on Layer-by-Layer, Neuron-by-Neuron Linear Optimization
Citation: XIE Hong, CHENG Hao-zhong, NIU Dong-xiao, ZHANG Guo-li. A Fast Learning Algorithm for Feedforward Neural Networks Based on Layer-by-Layer, Neuron-by-Neuron Linear Optimization [J]. Acta Electronica Sinica, 2005, 33(1): 111-114.
Authors: XIE Hong  CHENG Hao-zhong  NIU Dong-xiao  ZHANG Guo-li
作者单位:1. 上海海事大学信息工程学院,上海,200135
2. 上海交通大学电气工程系,上海,200030
3. 华北电力大学工商管理学院,河北保定,071003
Abstract: This paper proposes a new fast layer-wise learning algorithm for feedforward neural networks. The optimization strategy alternates between the connection weights of the output layer and those of the hidden layer. The output-layer weights are optimized with a recursive least-squares algorithm based on the generalized inverse, while the hidden-layer weights are optimized neuron by neuron, with an orthogonal transformation used to speed up the computation at each learning step and to improve numerical stability. When the learning process stalls, a random perturbation is applied to escape premature convergence. Numerical experiments show that, compared with the momentum BP method, Newton-type methods, and existing layer-wise optimization algorithms, the new algorithm not only learns faster in less time but also remains effective as the network size grows.

Keywords: feedforward neural network  learning algorithm  layer-wise optimization  neuron-by-neuron optimization
Article ID: 0372-2112(2005)01-0111-04

A Fast Learning Algorithm for Feedforward Neural Network Based on the Layer-by-Layer and Neuron-by-Neuron Optimizing Procedure
XIE Hong, CHENG Hao-zhong, NIU Dong-xiao, ZHANG Guo-li. A Fast Learning Algorithm for Feedforward Neural Network Based on the Layer-by-Layer and Neuron-by-Neuron Optimizing Procedure [J]. Acta Electronica Sinica, 2005, 33(1): 111-114.
Authors:XIE Hong  CHENG Hao-zhong  NIU Dong-xiao  ZHANG Guo-li
Abstract: A new fast layer-wise learning algorithm for feedforward neural networks is proposed in this paper. In the proposed algorithm, the strategy is to optimize the weights of the output layer and the hidden layer alternately. A recursive least-squares algorithm based on the generalized inverse is applied to optimize the output-layer weights, while the hidden-layer weights are optimized neuron by neuron, and an orthogonal transformation is applied to accelerate the computation at every step and to improve numerical stability. When the learning process stalls, a stochastic disturbance is applied to escape premature convergence. Numerical experiments show that the new algorithm outperforms other learning algorithms in both learning speed and learning time, and remains efficient for large-scale neural networks.
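The alternating layer-wise idea in the abstract can be sketched on a toy network. This is only an illustrative sketch, not the paper's method: the network sizes, learning rate, and the plain per-neuron gradient step (standing in for the paper's linear sub-problem, orthogonal transformation, and random perturbation) are all assumptions.

```python
import numpy as np

# Toy 1-5-1 network fitting y = sin(x); hyperparameters are assumptions.
rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)   # toy inputs
y = np.sin(X)                                   # toy targets

n_hidden = 5
W_h = rng.normal(size=(1, n_hidden))            # hidden-layer weights
b_h = np.zeros(n_hidden)                        # hidden-layer biases

def hidden(X):
    return np.tanh(X @ W_h + b_h)               # hidden activations

for epoch in range(200):
    # Step 1: with the hidden weights fixed, the network output is linear
    # in the output-layer weights, so they can be solved exactly by least
    # squares (generalized inverse), as the abstract describes.
    H = np.hstack([hidden(X), np.ones((len(X), 1))])
    W_o, *_ = np.linalg.lstsq(H, y, rcond=None)

    # Step 2: with the output layer fixed, update the hidden weights
    # neuron by neuron (here a simple gradient step per neuron).
    A = hidden(X)
    err = np.hstack([A, np.ones((len(X), 1))]) @ W_o - y
    for j in range(n_hidden):
        d = (err * W_o[j]) * (1.0 - A[:, [j]] ** 2)  # error at pre-activation j
        W_h[:, j] -= 0.05 * (X.T @ d).ravel() / len(X)
        b_h[j] -= 0.05 * float(d.sum()) / len(X)

mse = float(np.mean(err ** 2))                  # final training error
```

The appeal of the alternating scheme is visible even in this sketch: the output layer never needs iterative tuning, since each outer step solves it in closed form and only the hidden layer is updated incrementally.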
Keywords:feedforward neural network  learning algorithm  layer-wise  neuron-wise
This article is indexed in CNKI, VIP (Weipu), Wanfang Data, and other databases.