Similar Articles
A total of 20 similar articles were found.
1.
This paper studies the pointwise approximation properties of the polynomial of best approximation to a continuous function. Via a modulus-of-continuity estimate for a concrete function, a pointwise estimate of the order of approximation by the polynomial of best approximation is obtained, and it is shown that there exist continuous functions for which the polynomial of best approximation satisfies the Timan theorem.
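
For context, the Timan theorem referred to here is the classical pointwise estimate for algebraic polynomial approximation: for every f ∈ C[-1, 1] and every n ≥ 1 there is a polynomial P_n of degree at most n such that
$$|f(x)-P_n(x)| \le C\,\omega\!\Big(f;\ \tfrac{\sqrt{1-x^2}}{n}+\tfrac{1}{n^2}\Big),\qquad x\in[-1,1],$$
where ω(f; ·) is the modulus of continuity of f and C is an absolute constant.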

2.
The approximation problem for multivariate Cardaliaguet-Euvrard type neural network operators is studied. Rate estimates are given for the approximation of continuous functions and of differentiable functions by these operators, and Jackson-type inequalities are established.

3.
A new sigmoidal-type neural network is introduced, and pointwise and global estimates for its approximation of continuous functions are given. The results show that the new neural network operators attain approximation rates that polynomial approximation cannot achieve. To improve the rate of approximation for smooth functions, a linear combination of these neural networks is further introduced, and pointwise and global estimates for the combined approximation are given. A numerical example concludes the paper.
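
To make this kind of operator concrete, here is a minimal sketch, not the paper's specific construction: a quasi-interpolation operator on [0, 1] built from the logistic sigmoid, with bell function b(x) = σ(x + 1) − σ(x − 1) weighting the samples f(k/n). The function names, test function, and sample sizes are arbitrary illustrative choices.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def bell(x):
        # Bell-shaped function obtained from the logistic sigmoid.
        return sigmoid(x + 1.0) - sigmoid(x - 1.0)

    def nn_operator(f, n, x):
        # Generic sigmoidal quasi-interpolation operator on [0, 1]:
        # N_n(f)(x) = sum_k f(k/n) b(n*x - k) / sum_k b(n*x - k).
        k = np.arange(n + 1)
        weights = bell(n * x[:, None] - k[None, :])   # shape (len(x), n + 1)
        return (weights @ f(k / n)) / weights.sum(axis=1)

    f = lambda t: np.abs(t - 0.5)                     # continuous, not smooth at 0.5
    x = np.linspace(0.0, 1.0, 201)
    for n in (10, 40, 160):
        err = np.max(np.abs(nn_operator(f, n, x) - f(x)))
        print(f"n = {n:4d}   max error ~ {err:.4f}")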

4.
葛彩霞 《应用数学》1999,12(1):47-49
This paper studies the best approximation capability of three-layer feedforward neural networks. We prove that for three-layer feedforward networks whose hidden-layer activation function is a polynomial, once the number of hidden neurons exceeds a certain bound, the input-output functions of the network span a finite-dimensional linear space, so the network can realize best approximation in C(K). It is further conjectured that, for a non-polynomial activation function and finitely many neurons, the network does not possess the best approximation property.
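
A quick numerical check of this finite-dimensionality phenomenon, sketched under the assumption of the quadratic activation σ(t) = t², one input variable, and randomly drawn hidden-layer parameters: every hidden-unit output (wx + b)² lies in span{1, x, x²}, so the rank of the hidden-layer feature matrix never exceeds 3, no matter how many neurons are used.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 50)                    # sample points

    for n_hidden in (3, 10, 100):
        w = rng.normal(size=n_hidden)                 # random input weights
        b = rng.normal(size=n_hidden)                 # random biases
        # Hidden-layer outputs with the polynomial activation sigma(t) = t^2.
        hidden = (np.outer(x, w) + b) ** 2            # shape (50, n_hidden)
        # The rank never exceeds 3 = dim span{1, x, x^2}: the network's
        # input-output functions span a finite-dimensional space.
        print(n_hidden, "hidden units -> rank", np.linalg.matrix_rank(hidden))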

5.
Using K-functionals and moduli of smoothness as tools, together with function-decomposition techniques, the rate of approximation of continuous functions by multivariate Durrmeyer polynomials on the simplex is studied and the rate of convergence is estimated, which completes the work of H. Berens et al.

6.
Some remarks on the Weierstrass approximation theorem
The Weierstrass approximation theorem is one of the fundamental theorems of approximation theory; it states that a continuous function on a closed interval can be approximated by polynomials. The theorem is extended here: a function that is merely continuous almost everywhere need not share the approximation property of continuous functions, yet a function that is nowhere continuous may possess it. It is proved that a function defined on a closed interval and equal almost everywhere to a continuous function has the analogous approximation property, and an application extending the Weierstrass approximation theorem is given.
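
The constructive route to the Weierstrass theorem via Bernstein polynomials, B_n(f)(x) = Σ_k f(k/n) C(n,k) x^k (1−x)^{n−k}, is easy to observe numerically; the following is a small sketch (the test function and sample sizes are arbitrary choices).

    import numpy as np
    from math import comb

    def bernstein(f, n, x):
        # Bernstein polynomial B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) on [0, 1].
        k = np.arange(n + 1)
        coeffs = np.array([comb(n, j) for j in k], dtype=float)
        basis = coeffs * x[:, None] ** k * (1.0 - x[:, None]) ** (n - k)
        return basis @ f(k / n)

    f = lambda t: np.abs(t - 0.5)                     # continuous, not differentiable at 0.5
    x = np.linspace(0.0, 1.0, 201)
    for n in (10, 50, 200):
        err = np.max(np.abs(bernstein(f, n, x) - f(x)))
        print(f"n = {n:4d}   max |f - B_n f| ~ {err:.4f}")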

7.
Stancu polynomials on the simplex and best polynomial approximation
曹飞龙  徐宗本 《数学学报》2003,46(1):189-196
As a generalization of the Bernstein polynomials, multivariate Stancu polynomials on the simplex are defined. Taking the best polynomial approximation as the yardstick, an approximation theorem and order-of-approximation estimates for Stancu polynomials applied to continuous functions are established, and an inverse approximation theorem for Stancu polynomials is given, so that the approximation behaviour of Stancu polynomials is characterized in terms of the best polynomial approximation.
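
For orientation, a sketch of the univariate Stancu operator on [0, 1] in its standard one-parameter form, an illustration only (the paper works with the multivariate operator on the simplex, and the test function and parameters below are arbitrary choices). With α = 0 the operator reduces to the Bernstein polynomial.

    import numpy as np
    from math import comb, prod

    def stancu_weight(n, k, x, alpha):
        # Stancu basis weight w_{n,k}(x; alpha); alpha = 0 gives the Bernstein basis.
        num = prod(x + i * alpha for i in range(k)) * prod(1.0 - x + j * alpha for j in range(n - k))
        den = prod(1.0 + i * alpha for i in range(n))
        return comb(n, k) * num / den

    def stancu(f, n, x, alpha=0.0):
        # Univariate Stancu polynomial P_n^(alpha)(f)(x) on [0, 1].
        return sum(f(k / n) * stancu_weight(n, k, x, alpha) for k in range(n + 1))

    f = lambda t: np.sin(np.pi * t)
    for alpha in (0.0, 0.1):
        errs = [abs(stancu(f, 30, x, alpha) - f(x)) for x in np.linspace(0.0, 1.0, 101)]
        print(f"alpha = {alpha}:   max error ~ {max(errs):.4f}")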

8.
王冠闽 《数学研究》1998,31(2):189-196
The best constant, in terms of the second-order modulus of continuity, is determined for the approximation of a function f(x) ∈ C_{2π} by the Jackson operator J_n(f, x), and an upper-bound estimate is given for the best approximation E_n(f) of a continuous function f(x) by trigonometric polynomials of degree not exceeding n.

9.
虞旦盛  周平 《数学学报》2016,59(5):623-638
First, a neural network operator activated by ramp functions is introduced; direct and inverse theorems for its approximation of continuous functions are established, and its essential order of approximation is given. Next, linear combinations of these operators are introduced to raise the order of approximation, and the simultaneous approximation problem for such combinations is studied. Finally, a new neural network operator is constructed by means of Steklov functions, and direct and inverse theorems for its approximation in the space L^p[a,b] are established.

10.
The optimal order of approximation of q-Stancu polynomials on the simplex
Multivariate q-Stancu polynomials on the simplex are constructed; they generalize the classical Bernstein and Stancu polynomials. Upper and lower bound estimates for the approximation of continuous functions by these polynomials are established, from which the optimal order of approximation (saturation order) for continuous functions and its characterization are derived. The saturation class for the approximation of continuous functions by these polynomials is also studied.

11.
The first goal of this paper is to establish some properties of the ridge function representation for multivariate polynomials, and the second one is to apply these results to the problem of approximation by neural networks. We find that for continuous functions, the rate of approximation obtained by a neural network with one hidden layer is no slower than that of an algebraic polynomial.
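
For readers unfamiliar with the terminology: a ridge function is a multivariate function of the form g(a·x) with a fixed direction a ∈ R^d, and the link with polynomials used in such arguments is the fact that a homogeneous polynomial p of degree k in d variables can be written as a finite sum of ridge monomials,
$$p(x)=\sum_{j=1}^{m} c_j\,(a_j\cdot x)^k,$$
for suitable directions a_j and coefficients c_j.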

12.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers using a sigmoidal activation function to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets with controllable magnitude of free parameters can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures, the approximation rate stops improving for functions of very high smoothness.

13.
Deep neural networks with rectified linear units (ReLU) have become increasingly popular. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations for smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For general smooth functions, we first project the function onto its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms for tensor-product grid and sparse grid discretizations to obtain polynomial approximations. Our constructive algorithms show clearly a close connection between spectral methods and deep neural networks: PowerNets with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
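
The mechanism that lets RePU networks reproduce polynomials exactly can already be seen with the quadratic unit σ₂(t) = max(0, t)²: squares and products come out of two-term combinations of RePUs with no approximation error. Below is a minimal sketch of these identities only, not the paper's PowerNet construction.

    import numpy as np

    def repu(t, s=2):
        # Rectified power unit sigma_s(t) = max(0, t)^s.
        return np.maximum(0.0, t) ** s

    def square(x):
        # x^2 = sigma_2(x) + sigma_2(-x), with no approximation error.
        return repu(x) + repu(-x)

    def product(x, y):
        # x*y = (square(x + y) - square(x - y)) / 4  (polarization identity), exactly.
        return (square(x + y) - square(x - y)) / 4.0

    rng = np.random.default_rng(1)
    x, y = rng.normal(size=1000), rng.normal(size=1000)
    print("max |square(x) - x^2|  :", np.max(np.abs(square(x) - x ** 2)))
    print("max |product(x,y) - xy|:", np.max(np.abs(product(x, y) - x * y)))
    # Composing such exact square/product gadgets layer by layer is what lets a RePU
    # network with n hidden layers represent polynomials of degree up to s^n.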

14.
§1 Introduction. In recent years there has been growing interest in the problem of neural network and related approximation, and many important results have been obtained. Because of its ability for large-scale parallel computation and its excellent self-adaptation and approximation, the neural network has been widely applied. The approximation ability of a neural network depends on its topological structure. Let R^s be an s-dimensional Euclidean space and φ(x) a real function defined on R^s. When φ(x) is an excitation function and x ∈ R^s is an input vector, the simple neural network …

15.
In this article, we study approximation properties of single hidden layer neural networks with weights varying in finitely many directions and with thresholds from an open interval. We obtain a necessary and simultaneously sufficient measure theoretic condition for density of such networks in the space of continuous functions. Further, we prove a density result for neural networks with a specifically constructed activation function and a fixed number of neurons.

16.
We obtain a sharp lower bound estimate for the approximation error of a continuous function by single hidden layer neural networks with a continuous activation function and weights varying on two fixed directions. We show that for a certain class of activation functions this lower bound estimate turns into equality. The obtained result provides us with a method for direct computation of the approximation error. As an application, we give a formula, which can be used to compute instantly the approximation error for a class of functions having second order partial derivatives.

17.
The construction and approximation problem for spherical neural networks is studied. Using generalized de la Vallée Poussin means on the sphere, spherical quadrature formulas, and a modified univariate Cardaliaguet-Euvrard neural network operator, single-hidden-layer feedforward networks with the logistic activation function are constructed, and Jackson-type error estimates are given.

18.
This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in L2(Rd). Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional methods, in which an existence approach is mainly used in approximation theories for neural networks, we follow a constructive approach to prove that one may simply choose the parameters of the hidden units at random and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in L2(Rd) to any accuracy. Our result shows that, given any non-zero activation function g : R+ → R with g(‖x‖) ∈ L2(Rd) for RBF hidden units, or any non-zero activation function g(x) ∈ L2(Rd) for TDI hidden units, the incremental network function fn with randomly generated hidden units converges to any target function in L2(Rd) with probability one as the number of hidden units n → ∞, provided only that the weights between the hidden units and the output unit are properly adjusted.
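
The constructive idea of drawing the hidden-unit parameters at random and fitting only the output weights can be sketched in a few lines for the RBF case. This is a toy illustration with a Gaussian activation and a batch least-squares fit of the output weights (the paper's scheme is incremental, adding units one at a time); the target function and parameter ranges are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):
        return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

    X = rng.uniform(-1.0, 1.0, size=(500, 2))        # training inputs in R^2
    y = target(X)

    for n_hidden in (10, 50, 200):
        centers = rng.uniform(-1.0, 1.0, size=(n_hidden, 2))   # random RBF centres
        widths = rng.uniform(0.2, 1.0, size=n_hidden)          # random RBF widths
        # Hidden-unit outputs g(||x - c_i|| / w_i) with a Gaussian g; none of these
        # parameters is trained.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        H = np.exp(-(dists / widths) ** 2)
        # Only the hidden-to-output weights are adjusted, by linear least squares.
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
        print(f"{n_hidden:4d} random hidden units -> training RMSE ~ {rmse:.4f}")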

19.
Compared with fitting over planar domains, fitting data on the sphere has been an important and active issue in geoscience, meteorology, brain imaging, and so on. In this paper, with the help of the Jackson-type theorem for polynomial approximation on the sphere, we construct spherical feed-forward neural networks to approximate continuous functions defined on the sphere. As a metric, the modulus of smoothness of spherical functions is used to measure the error of the approximation, and a Jackson-type theorem on the approximation is established.

20.
We prove that an artificial neural network with multiple hidden layers and a kth-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a Euclidean space so as to achieve the Jackson rate of approximation. Moreover, if the function to be approximated has an analytic extension, then a nearly geometric rate of approximation can be achieved. We also discuss the problem of approximation on a compact subset of a Euclidean space with such networks using a classical sigmoidal response function. Dedicated to Dr. C.A. Micchelli on the occasion of his fiftieth birthday, December 1992. Research supported in part by AFOSR Grant No. 226 113 and by the AvH Foundation.
