Complexity of Neural Network Approximation with Limited Information: A Worst Case Approach
Abstract: In neural network theory, the complexity of constructing networks to approximate input-output functions is of interest. We study this in the more general context of approximating elements f of a normed space F using partial information about f. We assume that both the information about f and the size of the network are limited, as is typical in radial basis function networks. We show that the complexity can be essentially split into two independent parts: information ε-complexity and neural ε-complexity. We use a worst case setting and integrate elements of information-based complexity and nonlinear approximation. We consider deterministic and/or randomized approximations using information possibly corrupted by noise. The results are illustrated by examples, including approximation by piecewise polynomial neural networks.
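The split described in the abstract can be made concrete with a toy sketch (not taken from the paper): approximate a target function f on [0, 1] from a limited number of possibly noisy point evaluations (the information), using a piecewise linear approximant whose number of pieces plays the role of the network size. All names below (`build_approximant`, `worst_case_error`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def build_approximant(f, n_samples, n_pieces, noise=0.0, seed=0):
    """Hedged sketch: recover a piecewise linear approximant of f from
    n_samples point evaluations (information, possibly noisy) using
    n_pieces linear pieces (a stand-in for the network size)."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, 1.0, n_samples)                 # information: n point evaluations
    ys = f(xs) + noise * rng.standard_normal(n_samples)   # possibly corrupted by noise
    knots = np.linspace(0.0, 1.0, n_pieces + 1)           # breakpoints of the k pieces
    vals = np.interp(knots, xs, ys)                       # knot values read off the data
    return lambda x: np.interp(x, knots, vals)            # the piecewise linear "network"

def worst_case_error(f, g, grid=2000):
    """Sup-norm error on a fine grid, mimicking the worst case setting."""
    t = np.linspace(0.0, 1.0, grid)
    return float(np.max(np.abs(f(t) - g(t))))

f = lambda x: np.sin(2.0 * np.pi * x)
coarse = build_approximant(f, n_samples=400, n_pieces=5)   # too few pieces
fine = build_approximant(f, n_samples=400, n_pieces=40)    # more pieces, same information
print(worst_case_error(f, coarse), worst_case_error(f, fine))
```

With the information budget held fixed, increasing the number of pieces shrinks the error until the approximation-side cost stops dominating; conversely, with noisy or scarce samples the error floor is set by the information side. This mirrors, in a very loose way, the two ε-complexities the paper separates.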
This article is indexed in ScienceDirect and other databases.