
CONVERGENCE OF ONLINE GRADIENT METHOD WITH A PENALTY TERM FOR FEEDFORWARD NEURAL NETWORKS WITH STOCHASTIC INPUTS
Authors: SHAO Hongmei (邵红梅), WU Wei (吴微), LI Feng (李峰)
Affiliation: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, PRC
Funding: Partly supported by the National Natural Science Foundation of China and the Basic Research Program of the Committee of Science, Technology and Industry of National Defense of China.
Abstract: The online gradient algorithm has been widely used for training feedforward neural networks. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are supplied in a stochastic order. Both the monotonicity of the error function over the iterations and the boundedness of the weights are guaranteed. We also present a numerical experiment to support our results.

Keywords: feedforward neural networks, convergence, stochastic variables, monotonicity, boundedness, online gradient method
This article is indexed by CNKI, VIP (维普), and other databases.
