Limitations of the approximation capabilities of neural networks with one hidden layer
Authors:C. K. Chui  Xin Li  H. N. Mhaskar
Affiliation:(1) Department of Mathematics, Texas A&M University, 77843 College Station, TX, USA;(2) Department of Mathematical Sciences, University of Nevada, 89154 Las Vegas, NV, USA;(3) Department of Mathematics, California State University, 90032 Los Angeles, CA, USA
Abstract: Let s ≥ 1 be an integer and W be the class of all functions having integrable partial derivatives on [0, 1]^s. We are interested in the minimum number of neurons in a neural network with a single hidden layer required to provide a mean approximation order of a preassigned ε > 0 to each function in W. We prove that this number cannot be
$$\mathcal{O}(\epsilon^{-s}\log(1/\epsilon))$$
if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any δ > 0, a network with
$$\mathcal{O}(\epsilon^{-s-\delta})$$
neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other L^p norms.
The research of this author was supported by NSF Grant #DMS 92-0698. The research of this author was supported, in part, by AFOSR Grant #F49620-93-1-0150 and by NSF Grant #DMS 9404513.
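To see how the two neuron-count bounds compare, the following sketch evaluates ε^{-s} log(1/ε) (the rate shown to be unattainable under localization) against ε^{-s-δ} (the rate the paper achieves) for an arbitrary illustrative choice of s = 2 and δ = 0.1; these parameter values are not taken from the paper, only the formulas are.

```python
import math

def neurons_log_bound(eps: float, s: int) -> float:
    # The unattainable target rate: eps^{-s} * log(1/eps) neurons.
    return eps ** (-s) * math.log(1.0 / eps)

def neurons_delta_bound(eps: float, s: int, delta: float) -> float:
    # The achievable rate from the paper: eps^{-s-delta} neurons.
    return eps ** (-(s + delta))

s, delta = 2, 0.1  # hypothetical illustrative choices
for eps in (1e-2, 1e-20):
    print(f"eps={eps:g}: log-bound={neurons_log_bound(eps, s):.3e}, "
          f"delta-bound={neurons_delta_bound(eps, s, delta):.3e}")
```

For moderate accuracies (e.g. ε = 0.01) the log-factor bound is actually larger, but as ε → 0 the factor ε^{-δ} eventually dominates log(1/ε), so asymptotically the achievable bound requires more neurons than the unattainable one; the gap between the two is exactly what the paper's lower bound leaves open, up to the arbitrary δ.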
Keywords: Neural networks; Sobolev spaces; spline approximation; ridge functions
This article is indexed in SpringerLink and other databases.