Similar Documents (19 results)
1.
Error estimates for interpolation neural networks in metric spaces
We study neural network interpolation and approximation in metric spaces. A class of generalized activation functions is first introduced, and the existence of interpolation neural networks in metric spaces is discussed by a fairly concise method; error estimates are then given for the approximation of continuous functions by these interpolation networks.

2.
This paper studies the approximate approximation on a closed interval by quasi-interpolation operators built from the Gaussian kernel. Using function extension and an approximate partition of unity, a quasi-interpolation operator is constructed, and an estimate of its approximation order in the uniform norm is obtained.
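For orientation, the classical Gaussian quasi-interpolant of "approximate approximation" theory can be sketched on the whole line, where no function extension is needed (the grid, the width parameter D, and the test function below are illustrative choices, not the paper's exact operator):

```python
# A minimal sketch, assuming the standard Gaussian quasi-interpolant on the
# uniform grid {j*h}:
#   Qf(x) = (pi*D)**(-1/2) * sum_j f(j*h) * exp(-(x - j*h)**2 / (D*h**2)).
# The shifted Gaussians form only an approximate partition of unity, so Qf
# approximates f up to O(h^2) plus a small saturation term.
import numpy as np

def gaussian_quasi_interp(f, h, x, D=2.0, j_max=400):
    nodes = np.arange(-j_max, j_max + 1) * h
    w = np.exp(-((x[:, None] - nodes[None, :]) ** 2) / (D * h * h))
    return (w @ f(nodes)) / np.sqrt(np.pi * D)

f = np.cos
x = np.linspace(-1.0, 1.0, 11)
for h in (0.1, 0.05, 0.025):
    err = np.max(np.abs(gaussian_quasi_interp(f, h, x) - f(x)))
    print(f"h = {h}: max error = {err:.2e}")
```

The printed errors decay like O(h^2) until the saturation level of the approximate partition of unity is reached, which is the "approximate approximation" phenomenon the abstract refers to.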

3.
A three-layer fuzzy feedforward neural network is obtained from Gaussian-type membership functions and sampled data. The model obtains its optimal weights by the direct weight-determination method, and the number of neurons in the single hidden layer is chosen from the interpolation samples in the data. The resulting network is an approximate-interpolation neural network. Simulation experiments show that the Gaussian fuzzy feedforward network offers high approximation accuracy, a simple structure, good denoising behavior, and high real-time performance.
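In its usual form, the direct weight-determination step mentioned above reduces to a linear least-squares problem, because the Gaussian hidden layer is fixed once the samples are chosen; a minimal sketch follows (node placement, width, and data are illustrative assumptions, not the paper's settings):

```python
# A minimal sketch of direct weight determination: with Gaussian hidden units
# fixed, the optimal output weights of a three-layer network solve a linear
# least-squares problem, so they follow from one pseudoinverse instead of
# iterative training.
import numpy as np

def gaussian_hidden(x, centers, width):
    # Hidden-layer matrix H[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 40))                   # sampled inputs
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.size)    # noisy targets

centers = np.linspace(-1, 1, 12)                      # hidden neurons on a grid
H = gaussian_hidden(x, centers, width=0.25)
w = np.linalg.pinv(H) @ y                             # weights determined directly

print("max fit error:", np.max(np.abs(H @ w - y)))
```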

4.
周庆华 《中国科学A辑》2007,37(3):375-384
In this paper we study a quadratic interpolation model method among direct search methods. Building on the simplex method, the local information about the problem revealed by the algorithm's iterations is combined to construct new search directions, and hence a new search subspace; an approximate quadratic model of the original objective function is then solved over this subspace. Our motivation is to use the information produced by the earlier steps of the algorithm to construct directions along which fast descent is more likely. Experiments show that, for most test problems, our method significantly reduces the number of function evaluations.

5.
This paper studies the approximation properties of the Bernstein-Kantorovich quasi-interpolation in Orlicz spaces. The boundedness of the quasi-interpolation in Orlicz spaces is proved first; then, using the Hölder inequality, the Jensen inequality, and the equivalence between the K-functional and the modulus of smoothness in Orlicz spaces, direct, inverse, and equivalence theorems for approximation by this quasi-interpolation in Orlicz spaces are established.
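For background, the K-functional/modulus-of-smoothness equivalence invoked above typically takes the following standard form (quoted as orientation from the general literature, not from the paper):

```latex
% Standard statement of the equivalence in an Orlicz space with norm
% \|\cdot\|_\varphi; the constants hidden in \asymp are independent of f and t.
\[
  K_r(f, t^r)_\varphi
  = \inf_{g}\bigl\{\, \|f - g\|_\varphi + t^r \|g^{(r)}\|_\varphi \,\bigr\}
  \asymp \omega_r(f, t)_\varphi , \qquad 0 < t \le 1,
\]
% where the infimum runs over g with locally absolutely continuous
% (r-1)-st derivative and g^{(r)} in the Orlicz space.
```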

6.
To overcome inherent shortcomings of feedforward neural networks, a fuzzy feedforward network with a single hidden layer, built from sampled data, is proposed. The model obtains its optimal weights by the direct weight-determination method, and the network can set the number of hidden neurons autonomously according to the amount of sampled data, switching between approximate and exact interpolation. Numerical simulations show that the fuzzy feedforward network offers high approximation accuracy, an adjustable structure, and high real-time performance, and can be used for prediction and denoising.

7.
This paper studies the approximation of Lagrange interpolation and Hermite-Fejér interpolation in Orlicz spaces. Using standard methods and techniques of approximation theory together with the K-functional, the modulus of continuity, the Hölder inequality, and the Jensen inequality for convex functions, Stechkin-Marchaud-type inequalities are obtained for both interpolation processes in Orlicz spaces.

8.
虞旦盛  周平 《数学学报》2016,59(5):623-638
First, a class of neural network operators activated by ramp functions is introduced, direct and inverse theorems for their approximation of continuous functions are established, and their essential order of approximation is determined. Next, linear combinations of these operators are introduced to raise the approximation order, and simultaneous approximation by such combinations is studied. Finally, a new neural network operator is constructed by means of Steklov functions, and direct and inverse theorems for its approximation in the space $L^p[a,b]$ are established.

9.
张旭  吴嘎日迪 《应用数学》2018,31(1):237-242
Working in Orlicz spaces is an important branch of approximation theory, and interpolation approximation has deep theoretical significance and broad application prospects. Building on interpolation methods studied in spaces of continuous functions and in $L_p$ spaces, this paper studies the approximation of a Lagrange-type linear combination interpolation operator and of the Hermite interpolation operator in Orlicz spaces. Using the modulus of continuity, the Hölder inequality, and the Hardy-Littlewood maximal function, estimates of the degree of approximation are given for both interpolations; the results sharpen earlier results of the same type.

10.
An optimal error analysis of the partition-of-unity finite element method on Lagrange-type quadrilaterals is carried out by constructing an optimal local approximation space. Taking the standard bilinear basis functions on Lagrange-type quadrilaterals as the partition of unity, a special local polynomial approximation space is constructed, and a partition-of-unity finite element interpolation scheme with second-order reproducing property is given, which yields an optimal interpolation error exceeding the local approximation order.

11.
Deep neural networks with rectified linear units (ReLU) have become increasingly popular recently. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations for smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For general smooth functions, we first project the function onto its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms on tensor-product grid and sparse grid discretizations to obtain polynomial approximations. Our constructive algorithms clearly show a close connection between spectral methods and deep neural networks: PowerNets with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
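To make the degree count concrete, here is a minimal sketch (my own illustration, not the authors' code) verifying the two identities that drive the construction for power s = 2: exact squaring with one layer of RePUs, and exact multiplication via polarization:

```python
# Checks the key identities behind RePU networks with power s = 2:
#   sigma(x) = max(0, x)**2  satisfies  x**2 = sigma(x) + sigma(-x),
# and products follow from polarization:
#   x*y = ((x + y)**2 - (x - y)**2) / 4.
import numpy as np

def repu(x, s=2):
    """Rectified power unit: ReLU(x) raised to the power s."""
    return np.maximum(0.0, x) ** s

x = np.linspace(-2, 2, 9)
y = np.linspace(-1, 3, 9)

square = repu(x) + repu(-x)                  # exact x**2 with one hidden layer
product = (repu(x + y) + repu(-(x + y))
           - repu(x - y) - repu(-(x - y))) / 4.0   # exact x*y

assert np.allclose(square, x ** 2)
assert np.allclose(product, x * y)
print("x^2 and x*y represented exactly by RePU units")
```

Composing the product construction layer by layer yields monomials of degree up to $s^n$ at depth $n$, which is the depth-degree trade-off quoted in the abstract.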

12.
We prove a general interpolation theorem for linear operators acting simultaneously in several approximation spaces which are defined by multiparametric approximation families. As a consequence, we obtain interpolation results for finite families of Besov spaces of various types including those determined by a given set of mixed differences.

13.
Given a triangular array of points on an interval satisfying certain minimal separation conditions, a classical theorem of Szabados asserts the existence of polynomial operators that provide interpolation at these points as well as a near-optimal degree of approximation for arbitrary continuous functions on the interval. This paper provides a simple, functional-analytic proof of this fact. This abstract technique also leads to similar results in general situations where an analogue of the classical Jackson-type theorem holds. In particular, it allows one to obtain simultaneous interpolation and a near-optimal degree of approximation by neural networks on a cube, radial-basis functions on a torus, and Gaussian networks on Euclidean space. These ideas are illustrated by a discussion of simultaneous approximation and interpolation by polynomials and also by zonal-function networks on the unit sphere in Euclidean space.


14.
In this paper, we introduce a type of approximation operators of neural networks with sigmoidal functions on compact intervals, and obtain pointwise and uniform estimates of the approximation. To improve the approximation rate, we further introduce a type of combination of neural networks. Moreover, we show that the derivatives of functions can also be simultaneously approximated by the derivatives of the combinations. We also apply our method to construct approximation operators of neural networks with sigmoidal functions on infinite intervals.

15.
In this paper, a family of interpolation neural network operators is introduced. Here, ramp functions as well as sigmoidal functions generated by central B-splines are considered as activation functions. The interpolation properties of these operators are proved, together with a uniform approximation theorem with order, for continuous functions defined on bounded intervals. The relations with the theory of neural networks and with the theory of generalized sampling operators are discussed.
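A minimal sketch of the simplest member of such a family, assuming the ramp activation sigma(x) = min(max(x, 0), 1) and uniform nodes (illustrative choices, not the paper's general setting): the difference of two shifted ramps is the order-2 central B-spline (hat function), so the network below interpolates exactly at the nodes while approximating uniformly in between.

```python
# One-hidden-layer network built from ramp units that interpolates f at the
# uniform nodes k/n: hat(t) = ramp(t + 1) - ramp(t) equals the Kronecker delta
# at the integers, so the sum below reproduces f exactly at every node.
import numpy as np

def ramp(x):
    return np.clip(x, 0.0, 1.0)

def nn_interpolant(f, n, x):
    k = np.arange(0, n + 1)
    t = n * x[:, None] - k[None, :]
    phi = ramp(t + 1.0) - ramp(t)        # two ramp units per node
    return phi @ f(k / n)

f = lambda x: np.exp(x) * np.sin(5 * x)
n = 32
x = np.linspace(0, 1, 1001)

nodes = np.arange(n + 1) / n
assert np.allclose(nn_interpolant(f, n, nodes), f(nodes))  # exact at nodes
print("uniform error:", np.max(np.abs(nn_interpolant(f, n, x) - f(x))))
```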

16.
In the following paper, we present a brief and easily accessible introduction to the theory of neural networks, with special emphasis on the rôle of pure and applied mathematics in this interesting field of research. In order to allow a quick and direct approach even for nonspecialists, we only consider three-layer feedforward networks with sigmoidal transfer functions and do not cover general multi-layer, recursive, or radial-basis-function networks. Moreover, we focus our attention on density and complexity results, while construction problems based on operator techniques are not discussed in detail. In particular, in connection with complexity results, we show that neural networks in general have the power to approximate certain function spaces with a minimal number of free parameters. In other words, from this specific point of view, neural networks represent one of the best possible approximation devices available. Besides pointing out this remarkable fact, the main motivation for presenting this paper is to give more mathematicians an idea of what is going on in the theory of neural networks and, perhaps, to encourage at least a few of them to start working in this highly interdisciplinary and promising field, too.

17.
《Mathematische Nachrichten》2017,290(2-3):226-235
In this paper, we develop the theory for a family of neural network (NN) operators of the Kantorovich type in the general setting of Orlicz spaces. In particular, a modular convergence theorem is established. In this way, we study the above family of operators in many instances of useful spaces by a single general approach. The above NN operators provide a constructive approximation process in which the coefficients, the weights, and the thresholds of the networks needed in order to approximate a given function f are known. At the end of the paper, several examples of Orlicz spaces, and of sigmoidal activation functions for which the present theory can be applied, are studied in detail.
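The Kantorovich idea itself is easy to sketch; the following is a simplified illustration (hat-shaped units and midpoint quadrature are my assumptions, not the operators studied in the paper) of why replacing point samples by local means matters for non-continuous functions in Orlicz-type spaces.

```python
# Kantorovich-type modification: point samples f(k/n) are replaced by the
# local means n * integral of f over [k/n, (k+1)/n], so the operator stays
# meaningful for functions that are merely integrable, not continuous.
import numpy as np

def local_means(f, n, m=64):
    # midpoint rule for n * integral of f over each cell [k/n, (k+1)/n]
    k = np.arange(n)[:, None]
    u = (k + (np.arange(m)[None, :] + 0.5) / m) / n
    return f(u).mean(axis=1)

def kantorovich_nn(f, n, x):
    k = np.arange(n)
    t = n * x[:, None] - k[None, :]
    phi = np.maximum(0.0, 1.0 - np.abs(t))   # hat-shaped network unit
    return phi @ local_means(f, n)

f = lambda u: np.sign(u - 0.5)   # discontinuous: point evaluation is brittle here
x = np.linspace(0.0, 1.0, 5)
print(kantorovich_nn(f, 64, x))
```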

18.
In this work we construct subdivision schemes refining general subsets of $\mathbb{R}^n$ and study their applications to the approximation of set-valued functions. Unlike previous works on set-valued approximation, our methods are developed and analyzed in the metric space of Lebesgue measurable sets endowed with the symmetric difference metric. The construction of the set-valued subdivision schemes is based on a new weighted average of two sets, which is defined for positive weights (corresponding to interpolation) and also when one weight is negative (corresponding to extrapolation). Using the new average with positive weights, we adapt to sets spline subdivision schemes computed by the Lane–Riesenfeld algorithm, which requires only averages of pairs of numbers; the averages of numbers are then replaced by the new averages of pairs of sets. Among other features of the resulting set-valued subdivision schemes, we prove their monotonicity-preservation property. Using the new weighted average of sets with both positive and negative weights, we adapt to sets the 4-point interpolatory subdivision scheme. Finally, we discuss the extension of the results obtained in metric spaces of sets to general metric spaces endowed with an averaging operation satisfying certain properties.
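Because the scheme needs nothing but a pairwise average, it can be sketched compactly; the illustration below runs the Lane–Riesenfeld step on numbers, with the understanding that the paper replaces `avg` by its weighted average of two sets (data and parameters here are illustrative):

```python
# Lane-Riesenfeld refinement written so that the only primitive is a pairwise
# weighted average; adapting the scheme to sets amounts to swapping `avg`.
import numpy as np

def avg(a, b, w=0.5):
    # pairwise weighted average -- the single primitive the scheme needs
    return (1 - w) * a + w * b

def lane_riesenfeld_step(p, m):
    q = np.repeat(p, 2)          # duplicate every control point
    for _ in range(m):           # m smoothing sweeps of pairwise averages
        q = avg(q[:-1], q[1:])
    return q

p = np.array([0.0, 1.0, 0.0, 2.0, 1.0])
for _ in range(4):
    p = lane_riesenfeld_step(p, m=2)   # m = 2 reproduces Chaikin's scheme
print(len(p), "points after 4 refinements")
```

Since each sweep is nothing but averaging of neighbors, monotone data stays monotone, which is the monotonicity-preservation property mentioned above.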

19.
Since the spherical Gaussian radial function is strictly positive definite, the authors use linear combinations of translations of the Gaussian kernel to interpolate scattered data on spheres in this article. Since target functions usually lie outside the native spaces, and since one has to solve a large-scale system of linear equations to obtain the combination coefficients of the interpolant, the authors first examine some problems concerning interpolation with Gaussian radial functions. They then construct quasi-interpolation operators from the Gaussian radial function and obtain the degrees of approximation. Moreover, they derive the error relations between quasi-interpolation and interpolation when the two share the same basis functions. Finally, the authors discuss the construction and approximation of a quasi-interpolant with a locally supported function.
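The interpolation step described above can be sketched directly (node set, kernel width, and target function below are illustrative assumptions; the identity |x - y|^2 = 2(1 - x·y) on the unit sphere converts the Gaussian into zonal form):

```python
# Scattered-data interpolation on the sphere with the Gaussian kernel: strict
# positive definiteness makes the Gram matrix nonsingular, so the coefficients
# come from one (possibly large, ill-conditioned) linear solve -- the cost that
# quasi-interpolation is meant to avoid.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # scattered nodes on S^2

def gauss_kernel(A, B, lam=4.0):
    # exp(-lam * |x - y|^2) = exp(-2*lam*(1 - x.y)) on the unit sphere
    return np.exp(-2.0 * lam * (1.0 - A @ B.T))

f = lambda P: np.sin(3 * P[:, 0]) * P[:, 2]        # target samples
c = np.linalg.solve(gauss_kernel(X, X), f(X))      # interpolation coefficients

Y = rng.normal(size=(5, 3))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
print("interp values:", gauss_kernel(Y, X) @ c)
print("true values:  ", f(Y))
```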
