Parameter redundancy in neural networks: an application of Chebyshev polynomials
Authors:Bruce Curry
Affiliation:(1) Cardiff University, Cardiff Business School, Aberconway Building, Colum Drive, Cardiff, CF10 3EU, UK
Abstract:This paper deals with feedforward neural networks containing a single hidden layer and a sigmoid (logistic) activation function. Training such a network is equivalent to performing nonlinear regression with a flexible functional form, but the functional form in question is not easy to deal with. Chebyshev polynomials are suggested as a way forward, providing an approximation to the network that is superior to Taylor series expansions. Applying these approximations suggests that the network is liable to a ‘naturally occurring’ parameter redundancy, which has implications for the training process as well as for statistical inference. On the other hand, parameter redundancy does not appear to damage the fundamental property of universal approximation.
Keywords:Neural network  Non-linear regression  Taylor series  Chebyshev polynomial  Parameter redundancy
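The abstract's central comparison can be illustrated numerically. The sketch below (not taken from the paper; the degree, interval, and fitting method are illustrative assumptions) fits a Chebyshev series to the logistic sigmoid on [-4, 4] and compares its worst-case error against a Taylor expansion of the same degree about zero:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def taylor5(x):
    # Degree-5 Taylor expansion of the logistic sigmoid about x = 0:
    # sigma(x) = 1/2 + x/4 - x^3/48 + x^5/480 + ...
    return 0.5 + x / 4 - x**3 / 48 + x**5 / 480

# Least-squares Chebyshev fit of the same degree over the interval
xs = np.linspace(-4.0, 4.0, 401)
coeffs = np.polynomial.chebyshev.chebfit(xs, sigmoid(xs), deg=5)
cheb5 = np.polynomial.chebyshev.chebval(xs, coeffs)

err_cheb = np.max(np.abs(cheb5 - sigmoid(xs)))
err_taylor = np.max(np.abs(taylor5(xs) - sigmoid(xs)))
print(f"max |error| on [-4, 4]: Chebyshev {err_cheb:.4f}, Taylor {err_taylor:.4f}")
```

The Taylor expansion is accurate only near the expansion point and degrades badly at the interval's edges, while the Chebyshev fit keeps the error small across the whole interval, which is why the paper prefers it for approximating the network's functional form globally.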
This article is indexed in SpringerLink and other databases.