PowerNet: Efficient Representations of Polynomials and Smooth Functions by Deep Neural Networks with Rectified Power Units
Authors: Bo Li, Shanshan Tang & Haijun Yu
Abstract: Deep neural networks with rectified linear units (ReLU) have become increasingly popular recently. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations of smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For general smooth functions, we first project the function onto its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of the best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms on tensor-product grids and sparse-grid discretizations to obtain polynomial approximations. Our constructive algorithms show clearly a close connection between spectral methods and deep neural networks: PowerNets with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
Keywords: Deep neural network, rectified linear unit, rectified power unit, sparse grid, PowerNet.
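The following is not part of the original abstract: a minimal Python sketch illustrating the algebraic fact underlying such constructions, namely that a rectified power unit $\sigma_s(x)=\max(0,x)^s$ with $s=2$ can reproduce squares and products exactly, so stacking $n$ such layers reaches monomials of degree $s^n$. It is a hedged illustration of the general identity, not the authors' PowerNet algorithm.

```python
import numpy as np

def repu(x, s=2):
    """Rectified power unit: max(0, x)**s (s=1 recovers the ReLU)."""
    return np.maximum(0.0, x) ** s

# With s = 2, one RePU layer realizes squares and products exactly:
#   x**2 = repu(x) + repu(-x)
#   x*y  = ( repu(x+y) + repu(-x-y) - repu(x-y) - repu(-x+y) ) / 4
# Composing the product identity over n hidden layers yields monomials of
# degree up to s**n, matching the degree bound stated in the abstract.

rng = np.random.default_rng(0)
x, y = rng.standard_normal(1000), rng.standard_normal(1000)

square = repu(x) + repu(-x)
product = (repu(x + y) + repu(-x - y) - repu(x - y) - repu(-x + y)) / 4

assert np.allclose(square, x ** 2)
assert np.allclose(product, x * y)
print("exact squares and products via RePU (s=2) verified")
```

These identities carry no approximation error, which is why the overall error of a PowerNet reduces to the error of the underlying polynomial approximation.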
Original abstract available from 《数学研究》 (Journal of Mathematical Study).