Funding: Supported by the National Natural Science Foundation of China (Grant No. 10871220) and the "Mathematics + X" Project of Dalian University of Technology (Grant No. 842328).
Received: 2008-09-25
Revised: 2009-06-30

$L^2(R^d)$ Approximation Capability of Incremental Constructive Feedforward Neural Networks with Random Hidden Units
Citation: Jin Ling LONG, Zheng Xue LI, Dong NAN. $L^2(R^d)$ Approximation Capability of Incremental Constructive Feedforward Neural Networks with Random Hidden Units[J]. Journal of Mathematical Research with Applications, 2010, 30(5): 799-807. DOI: 10.3770/j.issn:1000-341X.2010.05.004
Authors: Jin Ling LONG (隆金玲), Zheng Xue LI (李正学), Dong NAN (南东)
Affiliation: 1. School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning 116024, P. R. China; Department of Mathematics, Southeast University, Nanjing, Jiangsu 210096, P. R. China
2. School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning 116024, P. R. China
3. College of Applied Sciences, Beijing University of Technology, Beijing 100022, P. R. China
Abstract: This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in $L^2(R^d)$. Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional methods, in which an existence approach dominates the approximation theory of neural networks, we follow a constructive approach and prove that one may simply choose the parameters of the hidden units at random and then adjust only the weights between the hidden units and the output unit to make the network approximate any function in $L^2(R^d)$ to any accuracy. Our result shows that, given any non-zero activation function $g:R^+\rightarrow R$ with $g(\left|x\right|_{R^d})\in L^2(R^d)$ for RBF hidden units, or any non-zero activation function $g(x)\in L^2(R^d)$ for TDI hidden units, the incremental network function $f_n$ with randomly generated hidden units converges to any target function in $L^2(R^d)$ with probability one as the number of hidden units $n\rightarrow\infty$, provided only that the weights between the hidden units and the output unit are properly adjusted.
Keywords: approximation; incremental feedforward neural networks; RBF neural networks; TDI neural networks; random hidden units
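To make the constructive scheme described in the abstract concrete, the following is a minimal numerical sketch, not the paper's exact construction: it works on a one-dimensional grid standing in for $R^d$, uses a Gaussian RBF activation (one admissible choice, since it is square-integrable), draws each hidden unit's center and width at random, and sets the new output weight to the $L^2$-optimal value $\beta_n=\langle f-f_{n-1},\,g_n\rangle/\|g_n\|^2$, giving the incremental update $f_n=f_{n-1}+\beta_n g_n$. The target function, parameter ranges, and unit count are all hypothetical choices for illustration.

    # Sketch: incremental RBF approximation with random hidden units (Python/NumPy).
    import numpy as np

    rng = np.random.default_rng(0)

    # Discretize an interval as a stand-in for R^d (here d = 1).
    x = np.linspace(-5.0, 5.0, 2001)
    dx = x[1] - x[0]
    target = np.sinc(x) * np.exp(-0.1 * x**2)   # hypothetical target in L^2

    def rbf(x, center, width):
        # Gaussian RBF hidden unit; square-integrable, hence admissible here.
        return np.exp(-((x - center) / width) ** 2)

    f_n = np.zeros_like(x)                      # f_0 = 0
    residual = target - f_n
    for n in range(1, 201):
        # Randomly generate the n-th hidden unit's parameters.
        center = rng.uniform(-5.0, 5.0)
        width = rng.uniform(0.2, 2.0)
        g_n = rbf(x, center, width)
        # L^2-optimal output weight: beta_n = <e_{n-1}, g_n> / ||g_n||^2,
        # with inner products approximated by quadrature on the grid.
        beta_n = np.trapz(residual * g_n, dx=dx) / np.trapz(g_n * g_n, dx=dx)
        f_n = f_n + beta_n * g_n                # f_n = f_{n-1} + beta_n * g_n
        residual = target - f_n

    print("L2 error after 200 random units:", np.sqrt(np.trapz(residual**2, dx=dx)))

Because each $\beta_n$ minimizes $\|f-f_{n-1}-\beta g_n\|_{L^2}$ over $\beta$, the residual norm never increases; the theorem's content is that, with probability one over the randomly generated hidden units, it actually decreases to zero as $n\rightarrow\infty$.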