Limitations of the approximation capabilities of neural networks with one hidden layer |
| |
Authors: | C. K. Chui Xin Li H. N. Mhaskar |
| |
Affiliation: | (1) Department of Mathematics, Texas A&M University, 77843 College Station, TX, USA;(2) Department of Mathematical Sciences, University of Nevada, 89154 Las Vegas, NV, USA;(3) Department of Mathematics, California State University, 90032 Los Angeles, CA, USA |
| |
Abstract: | Lets1 be an integer andW be the class of all functions having integrable partial derivatives on [0, 1]s. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned>0 to each function inW. We prove that this number cannot be if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any>0, a network with neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for otherLp norms.The research of this author was supported by NSF Grant # DMS 92-0698.The research of this author was supported, in part, by AFOSR Grant #F49620-93-1-0150 and by NSF Grant #DMS 9404513. |
| |
Keywords: | Neural networks Sobolev spaces spline approximation ridge functions |
本文献已被 SpringerLink 等数据库收录! |
|