Similar Documents
A total of 20 similar documents were found (search time: 140 ms).
1.
Given Х(x) ∈ W1M(R^d), concrete translation networks are constructed from Х to approximate functions in the Sobolev space, and estimates of the approximation order are given.

2.
葛彩霞 《应用数学》1999,12(1):47-49
This paper studies the best-approximation capability of three-layer feedforward neural networks. We prove that for three-layer feedforward networks whose hidden-layer activation functions are polynomials, once the number of hidden neurons exceeds a certain bound, the input-output functions of the network span a finite-dimensional linear space, so the network can realize best approximation in C(K). We conjecture that for a non-polynomial activation function and finitely many neurons, the network does not have the best-approximation property.
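A sketch of the finite-dimensionality reasoning behind this result (our gloss under the stated assumptions, not the paper's wording): if the hidden activation $\sigma$ is a polynomial of degree $m$, then every network output
$$N(x)=\sum_{i=1}^{n}c_i\,\sigma(w_i x+b_i)$$
is itself a polynomial of degree at most $m$, so no matter how many hidden units are used, the input-output functions stay in the finite-dimensional space of polynomials of degree at most $m$; best approximation from a finite-dimensional subspace of $C(K)$ always exists.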

3.
朱来义 《数学进展》1995,24(4):327-334
For a bounded simply connected domain G with boundary ∂G = Γ ∈ C(1,α), α > 0, this paper studies the approximation properties of the Lagrange interpolation polynomials whose nodes are the zeros of the generalized Faber polynomials φn(z). Estimates of the orders of uniform approximation and of mean approximation for functions in A(Ḡ) are obtained, as well as an estimate of the order of mean approximation for functions in E^p(G); it is also shown that the estimate of the mean approximation order cannot be improved.

4.
Neural network approximation and system identification in L~p(R~n) (cited 1 time: 0 self-citations, 1 by others)
This paper mainly studies the approximation, by superpositions of functions, of functions in L~p(R~n), of nonlinear continuous functionals on L~p(R~n), and of nonlinear continuous operators; these problems are closely related to the approximation capability of Sigma-Pi neural networks.

5.
Application of the random weighting method to density estimation (cited 2 times: 0 self-citations, 2 by others)
This paper gives a random weighting estimate of the probability density function and proves that the random weighting distribution approximates the distribution of the standardized density estimator with accuracy o(1/√(nh)); a confidence interval for Ef_n(x) is also constructed, where f_n(x) is the kernel estimate of the density function and h = h_n is the bandwidth of the kernel estimate.
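As a rough illustration of the objects involved, the following is a minimal sketch of a weighted kernel density estimate, assuming a Gaussian kernel and Dirichlet-type random weights; the paper's exact weighting scheme and kernel are not specified here.

```python
import numpy as np

def kernel_density(x, sample, h, weights=None):
    """Weighted kernel density estimate with a Gaussian kernel."""
    if weights is None:
        weights = np.full(sample.size, 1.0 / sample.size)   # classical estimate f_n
    u = (x[:, None] - sample[None, :]) / h                   # scaled distances to data
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)           # Gaussian kernel values
    return (weights[None, :] * k).sum(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
h = 0.3                                                      # bandwidth h = h_n
grid = np.linspace(-3.0, 3.0, 61)

f_n = kernel_density(grid, sample, h)                        # standard kernel estimate
w = rng.exponential(size=sample.size)                        # one random-weighting replicate:
w /= w.sum()                                                 # Dirichlet(1,...,1) weights
f_rw = kernel_density(grid, sample, h, weights=w)
```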

6.
A new algorithm is proposed that uses an evolution strategy to adjust the basis-function coefficients of a functional network for multidimensional function approximation, and its learning procedure is described. The adaptivity of the evolution strategy is used to determine the coefficients of the basis functions, improving on the traditional approach in which the parameters of the functional network are obtained by solving a system of equations. Simulation results show that the new approximation algorithm is simple and feasible, approximates a given function to a prescribed accuracy, converges quickly, and has good approximation performance.
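A minimal sketch of the general idea, assuming a (1+1) evolution strategy with Gaussian mutation and a fixed monomial basis; the basis functions and ES variant used in the paper are not given here.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-1.0, 1.0, 101)
target = np.sin(np.pi * xs)                      # function to approximate
basis = np.vstack([xs**k for k in range(6)])     # fixed basis functions

def loss(c):
    return np.mean((c @ basis - target) ** 2)    # mean-square approximation error

c, sigma = np.zeros(6), 0.5                      # coefficients and mutation step size
for _ in range(5000):
    child = c + sigma * rng.normal(size=c.size)  # Gaussian mutation of the coefficients
    if loss(child) <= loss(c):                   # keep the better individual
        c, sigma = child, sigma * 1.05
    else:
        sigma *= 0.97                            # shrink the step after a failure
print("final MSE:", loss(c))
```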

7.
This paper investigates the [n/n] Padé approximation and proves that, when Pn(x)/Qn(x) is the [n/n] Padé approximant of f(x) at x = 0, Qn(x) = Pn(-x) holds if and only if f(x)f(-x) = 1; this halves the computational cost of the [n/n] Padé approximation for this class of functions.
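A quick check with the classical example $f(x)=e^{x}$ (our illustration, not taken from the paper): since $e^{x}e^{-x}=1$, the symmetry should hold, and indeed the $[1/1]$ Padé approximant of $e^{x}$ at $x=0$ is
$$\frac{P_1(x)}{Q_1(x)}=\frac{1+x/2}{1-x/2},\qquad Q_1(x)=1-\frac{x}{2}=P_1(-x),$$
so only the numerator coefficients need to be computed.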

8.
This paper obtains the order of mean approximation, on a domain D whose boundary has a continuous second derivative, of the (0,1,…,q) Hermite-Fejér interpolation polynomials based on asymptotic Fejér points to the interpolated functions from the class A(D̄); the order of uniform approximation on D is also obtained, and it is shown that these approximation orders are sharp.

9.
On the work of the Mathematical Institute of the USSR Academy of Sciences in approximation theory (Part II). S.A. Telyakovskii (Mathematical Institute of the USSR Academy of Sciences). 6. Approximation of multivariate functions. Direct and inverse theorems for the approximation of multivariate functions were first given by D. Jackson [4] and S.N. Bernstein [5] at the same time as the univariate theorems; a systematic study of multivariate functions ...

10.
It is known from [1] that row sequences of Padé approximants of a meromorphic function f(z) in a given domain converge almost uniformly to f(z). This paper extends that result to (α,β)-Padé approximation.

11.
In recent years, the uniformity analysis of universal approximation by feedforward neural networks has attracted wide attention. This paper systematically analyzes the uniform approximation of three-layer feedforward networks to the family of quasi-difference order-preserving functions, where the transfer function σ is a generalized sigmoidal function. This uniformity result is then used to establish a new class of fuzzy neural networks (FNN), namely polygonal FNNs, and the approximation of two given fuzzy functions by such networks is studied; these conclusions play a key role in analyzing the universal approximation capability of polygonal FNNs.

12.
Here we study the univariate quantitative approximation of real- and complex-valued continuous functions on a compact interval or on the whole real line by quasi-interpolation hyperbolic tangent neural network operators. This approximation is derived by establishing Jackson-type inequalities involving the modulus of continuity of the engaged function or its high-order derivative. Our operators are defined by using a density function induced by the hyperbolic tangent function. The approximations are pointwise and with respect to the uniform norm. The related feed-forward neural network has one hidden layer.
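The following is a minimal numerical sketch of a quasi-interpolation operator of this flavor, assuming the tanh-induced density $\Psi(x)=\tfrac{1}{4}(\tanh(x+1)-\tanh(x-1))$ and the form $F_n(f)(x)=\sum_k f(k/n)\,\Psi(nx-k)$; the exact normalization and operators in the paper may differ.

```python
import numpy as np

def psi(x):
    """Density induced by the hyperbolic tangent (assumed form)."""
    return 0.25 * (np.tanh(x + 1.0) - np.tanh(x - 1.0))

def tanh_operator(f, n, x, window=50):
    """Quasi-interpolation F_n(f)(x) = sum_k f(k/n) * psi(n*x - k), tail truncated."""
    k0 = int(np.floor(n * x))
    ks = np.arange(k0 - window, k0 + window + 1)
    return np.sum(f(ks / n) * psi(n * x - ks))

f = np.cos
for n in (10, 100, 1000):
    err = max(abs(tanh_operator(f, n, x) - f(x)) for x in np.linspace(0.0, 1.0, 51))
    print(n, err)   # error shrinks roughly with the modulus of continuity at scale 1/n
```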

13.
We introduce a new procedure for training artificial neural networks that approximates an objective function by the arithmetic mean of an ensemble of selected, randomly generated neural networks, and we apply this procedure to the classification (pattern recognition) problem. This approach differs from the standard one based on optimization theory. In particular, an individual network from the ensemble need not itself approximate the objective function.

14.
Single-hidden-layer neural networks and best polynomial approximation (cited 7 times: 1 self-citation, 6 by others)
This paper studies the approximation problem for single-hidden-layer neural networks. Taking the best polynomial approximation as the yardstick, a constructive method is used to estimate the rate at which single-hidden-layer networks approximate continuous functions. The results show that for any continuous function defined on a compact set, one can construct a single-hidden-layer neural network that approximates it, with approximation error at most twice the best polynomial approximation error of that function.

15.
Deep neural networks with rectified linear units (ReLU) have recently become increasingly popular. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations for smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For general smooth functions, we first project the function onto its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms for tensor-product grid and sparse grid discretization to get polynomial approximations. Our constructive algorithms show clearly a close connection between spectral methods and deep neural networks: PowerNets with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
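A small numerical illustration of the kind of identity that lets RePU units represent polynomials exactly, assuming the rectified power unit $\sigma_s(x)=\max(0,x)^s$ with $s=2$ (our gloss, not the paper's construction algorithm):

```python
import numpy as np

def repu(x, s=2):
    """Rectified power unit sigma_s(x) = max(0, x) ** s."""
    return np.maximum(x, 0.0) ** s

def square(x):
    # x^2 = sigma_2(x) + sigma_2(-x): two RePU units, no approximation error
    return repu(x) + repu(-x)

def product(x, y):
    # x*y = ((x+y)^2 - (x-y)^2) / 4, with each square built from RePU units
    return 0.25 * (square(x + y) - square(x - y))

x, y = np.random.default_rng(2).normal(size=(2, 5))
print(np.allclose(square(x), x**2), np.allclose(product(x, y), x * y))
```

Composing such exact squares and products layer by layer is what allows a RePU network with $n$ hidden layers to reach polynomial degree $s^n$.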

16.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate the approximation errors of neural network operators with the hyperbolic tangent activation function. First, an equation of partition of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation type neural network operators are constructed to approximate univariate and bivariate functions, respectively. The errors of the approximation are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are estimated.
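A quick numerical check of one tanh partition-of-unity identity of the kind referred to, assuming the same density $\Psi(x)=\tfrac{1}{4}(\tanh(x+1)-\tanh(x-1))$ as in the sketch under item 12; the formulation in the paper may differ.

```python
import numpy as np

psi = lambda x: 0.25 * (np.tanh(x + 1.0) - np.tanh(x - 1.0))
ks = np.arange(-200, 201)                    # truncation of the sum over all integers k
xs = np.linspace(-0.5, 0.5, 11)
sums = np.array([psi(x - ks).sum() for x in xs])
print(np.max(np.abs(sums - 1.0)))            # ~0: sum_k psi(x - k) = 1 for every x
```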

17.
§1 Introduction. In recent years there has been growing interest in the problem of neural network and related approximation, and many important results have been obtained. Because of its ability of large-scale parallel computation and of perfect self-adapting and approximation, the neural network has been widely applied. The approximation ability of the neural network depends on its topological structure. Let R^s be an s-dimensional Euclidean space and (x) a real function defined on R^s. When (x) is an excitation function and x ∈ R^s is an input vector, the simple neural network…

18.
虞旦盛  周平 《数学学报》2016,59(5):623-638
First, a class of neural network operators activated by the ramp function is introduced, direct and inverse theorems for their approximation of continuous functions are established, and the essential order of approximation is given. Next, linear combinations of these neural network operators are introduced to improve the approximation order, and the simultaneous approximation by such combinations is studied. Finally, a new neural network operator is constructed by means of the Steklov function, and direct and inverse theorems for its approximation in the space L~p[a,b] are established.
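A minimal sketch of one plausible ramp-activated operator of this kind, assuming the ramp $\sigma(x)=\min(\max(x,0),1)$ and the hat density $b(x)=\sigma(x+1)-\sigma(x)$; the operators and combinations studied in the paper may be defined differently.

```python
import numpy as np

ramp = lambda x: np.clip(x, 0.0, 1.0)             # ramp activation function
b = lambda x: ramp(x + 1.0) - ramp(x)             # hat density, sum_k b(x - k) = 1

def ramp_operator(f, n, x):
    """F_n(f)(x) = sum_k f(k/n) * b(n*x - k); b is supported on [-1, 1]."""
    k0 = int(np.floor(n * x))
    ks = np.arange(k0 - 1, k0 + 3)                # only nearby nodes contribute
    return np.sum(f(ks / n) * b(n * x - ks))

f = lambda t: np.exp(-t) * np.sin(3.0 * t)
print([abs(ramp_operator(f, n, 0.37) - f(0.37)) for n in (10, 100, 1000)])
```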

19.
This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in L2(Rd). Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional approximation theory for neural networks, which mainly relies on existence arguments, we follow a constructive approach and prove that one may simply choose the parameters of the hidden units at random and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in L2(Rd) to any accuracy. Our result shows that, given any non-zero activation function g: R+→R with g(‖x‖) ∈ L2(Rd) for RBF hidden units, or any non-zero activation function g(x) ∈ L2(Rd) for TDI hidden units, the incremental network function fn with randomly generated hidden units converges to any target function in L2(Rd) with probability one as the number of hidden units n→∞, provided the weights between the hidden units and the output unit are properly adjusted.
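The following is a minimal sketch of the random-hidden-unit idea for the RBF case, where the hidden-unit parameters are drawn at random and only the output weights are fitted (here in one batch by least squares on sample points; the paper's incremental, probability-one convergence scheme is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_hidden, n_samples = 2, 200, 1000
X = rng.uniform(-1, 1, size=(n_samples, d))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])   # target function

centers = rng.uniform(-1, 1, size=(n_hidden, d))         # random hidden-unit centers
widths = rng.uniform(0.2, 1.0, size=n_hidden)            # random hidden-unit widths

def hidden(X):
    # Gaussian RBF units g(||x - c|| / w) with randomly chosen c and w
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(dists / widths) ** 2)

H = hidden(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)                 # adjust only the output weights
print("train RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
```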

20.
When applying a fuzzy control method or an intelligence control method to an effective control over a complex system, it is sometimes necessary to identify systematic structure and parameters. Mathematically, it is just the problems of universal functional approximation and function approxi…

