On a problem of Hornik |
| |
Authors: | Ting Fan XIE, Fei Long CAO |
| |
Affiliation: | Department of Mathematics and Information Sciences, China Jiliang University, Hangzhou 310018, P. R. China |
| |
Abstract: | In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). At the same time, he remarked that "whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper answers the question in the affirmative: it proves that for a bounded activation function σ that is continuous almost everywhere (a.e.) on R, the collection of SLFNs is dense in C(K) if and only if σ is non-constant a.e. |
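The result can be illustrated numerically (this example is not from the paper): the Heaviside step function is bounded, continuous a.e., and non-constant, so by the theorem above SLFNs built from it are dense in C(K). The sketch below constructs an explicit n-unit SLFN of the standard form Σ c_i σ(x − θ_i) that approximates f(x) = x² on K = [0, 1]; the weight-selection scheme (matching f on a uniform grid) is an assumption for illustration, not the paper's construction.

```python
import numpy as np

def heaviside(t):
    # Bounded, a.e.-continuous, non-constant activation function
    return (t >= 0).astype(float)

def slfn_approx(f, n, x):
    """Evaluate an n-unit SLFN  sum_i c_i * sigma(x - theta_i)  at points x.

    The thresholds theta_i and jump coefficients c_i are chosen so that the
    network agrees with f at the grid points k/n (a simple illustrative
    scheme, not the construction used in the paper).
    """
    grid = np.linspace(0.0, 1.0, n + 1)
    thetas = grid[1:]                 # unit thresholds theta_i = i/n
    vals = f(grid)
    coefs = np.diff(vals)             # jump sizes c_i = f(theta_i) - f(theta_{i-1})
    out = np.full_like(x, vals[0])
    for c, th in zip(coefs, thetas):
        out += c * heaviside(x - th)
    return out

x = np.linspace(0.0, 1.0, 1001)
f = lambda t: t ** 2
err = np.max(np.abs(slfn_approx(f, 200, x) - f(x)))
print(err)  # sup-norm error on the sample grid; shrinks as n grows
```

With n units the network is piecewise constant on intervals of length 1/n, so the sup-norm error for this f is at most max|f′|/n; increasing n drives the error to 0, as density in C(K) predicts.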
| |
Keywords: | Neural networks; approximation; activation function |
Indexed in CNKI, SpringerLink, and other databases. |
| |
The original abstract and free full-text PDF are available from Acta Mathematica Sinica (English Series) (《数学学报(英文版)》). |