Similar Documents
20 similar documents found.
1.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate the approximation errors of neural network operators with the hyperbolic tangent activation function. First, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation neural network operators are constructed to approximate univariate and bivariate functions, respectively. The approximation errors are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are estimated.
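The partition-of-unity identity referred to in this abstract can be illustrated numerically. A minimal sketch, assuming the kernel φ(x) = (tanh(x+1) − tanh(x−1))/4 commonly used for tanh network operators; the operator name `nn_operator` and the uniform nodes k/n are illustrative choices, not necessarily the paper's exact construction:

```python
import numpy as np

def phi(x):
    # Kernel built from the hyperbolic tangent; the factor 1/4 makes the
    # integer shifts sum to one (a partition of unity), because tanh ranges
    # over (-1, 1) and the shifted differences telescope.
    return 0.25 * (np.tanh(x + 1.0) - np.tanh(x - 1.0))

def nn_operator(f, n, x):
    # Quasi-interpolation neural network operator on [0, 1]:
    # N_n(f)(x) = sum_k f(k/n) * phi(n*x - k).
    k = np.arange(0, n + 1)
    return np.sum(f(k / n) * phi(n * x - k))
```

Since phi decays exponentially, only a handful of terms near k ≈ n·x contribute, and the error at interior points is governed by the modulus of continuity of f, as the abstract states.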

2.
We give the relation between the derivatives of the Bernstein–Kantorovich operators and the modulus of smoothness, and an equivalence theorem for approximation by their linear combinations.

3.
In this paper, we study approximation by radial basis functions including Gaussian, multiquadric, and thin plate spline functions, and derive order of approximation under certain conditions. Moreover, neural networks are also constructed by wavelet recovery formula and wavelet frames.

4.
In this paper, a constructive theory is developed for approximating functions of one or more variables by superposition of sigmoidal functions. This is done in the uniform norm as well as in the $L^p$ norm. Results for the simultaneous approximation, with the same order of accuracy, of a function and its derivatives (whenever these exist), are obtained. The relation with neural networks and radial basis functions approximations is discussed. Numerical examples are given for the purpose of illustration.

5.
We study approximation by the Bernstein–Sikkema operators and obtain a strong-type direct theorem and a weak-type inverse theorem, improving the results of [1].

6.
This paper concerns approximation by a class of positive exponential-type multiplier operators on the unit sphere S^n of the (n+1)-dimensional Euclidean space for n ≥ 2. We prove that such operators form a strongly continuous contraction semigroup of class (C_0) and show the equivalence between the approximation errors of these operators and the K-functionals. We also give the saturation order and the saturation class of these operators. As examples, the r-th Boolean sum ⊕^r V_t^γ of the generalized spherical Abel–Poisson operator and the r-th Boolean sum ⊕^r W_t^k of the generalized spherical Weierstrass operator, for integer r ≥ 1 and reals γ, k ∈ (0, 1], have errors ||⊕^r V_t^γ f − f||_X ≍ ω^{rγ}(f, t^{1/γ})_X and ||⊕^r W_t^k f − f||_X ≍ ω^{2rk}(f, t^{1/(2k)})_X for all f ∈ X and 0 ≤ t ≤ 2π, where X is the Banach space of all continuous functions or all L^p-integrable functions, 1 ≤ p ≤ +∞, on S^n with norm ||·||_X, and ω^s(f, t)_X is the modulus of smoothness of degree s > 0 for f ∈ X. Moreover, ⊕^r V_t^γ and ⊕^r W_t^k have the same saturation class if γ = 2k.

7.
In this paper, we introduce a type of approximation operators of neural networks with sigmoidal functions on compact intervals, and obtain pointwise and uniform estimates of the approximation. To improve the approximation rate, we further introduce a type of combinations of neural networks. Moreover, we show that the derivatives of functions can also be simultaneously approximated by the derivatives of the combinations. We also apply our method to construct approximation operators of neural networks with sigmoidal functions on infinite intervals.

8.
In this paper, we investigate the relation between the rate of convergence for the derivatives of the combinations of Baskakov operators and the smoothness for the derivatives of the functions approximated. We give some direct and inverse results on pointwise simultaneous approximation by the combinations of Baskakov operators. We also give a new equivalent result on pointwise approximation by these operators.

9.
It is demonstrated, through theory and examples, how it is possible to construct directly and noniteratively a feedforward neural network to approximate arbitrary linear ordinary differential equations. The method, using the hard limit transfer function, is linear in storage and processing time, and the L2 norm of the network approximation error decreases quadratically with the increasing number of hidden layer neurons. The construction requires imposing certain constraints on the values of the input, bias, and output weights, and the attribution of certain roles to each of these parameters.

All results presented used the hard limit transfer function. However, the noniterative approach should also be applicable to the use of hyperbolic tangents, sigmoids, and radial basis functions.
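The direct, noniterative flavor of this construction can be sketched for a plain univariate target; the ODE setting of the paper is not reproduced here, and `heaviside_net` with uniform thresholds is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def heaviside_net(f, n, a=0.0, b=1.0):
    """Build a one-hidden-layer network with hard-limit (step) activations
    approximating f on [a, b] as a staircase, constructed directly with no
    training: neuron i fires for x >= x_i and contributes the jump
    f(x_i) - f(x_{i-1}) via its output weight."""
    xs = np.linspace(a, b, n + 1)
    jumps = np.diff(f(xs))   # output weights, read directly from f
    bias = f(xs[0])          # output bias fixes the value at the left end
    def net(x):
        x = np.asarray(x, dtype=float)
        h = (x[..., None] >= xs[1:]).astype(float)  # hard-limit activations
        return bias + h @ jumps
    return net
```

Each weight plays a fixed role (threshold, jump size, left-end value), mirroring the abstract's point that the construction imposes constraints on input, bias, and output weights, and the approximation error shrinks as hidden neurons are added.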


10.
Starting from the equivalence between the Ditzian–Totik modulus and , where , in this article large classes of functions are introduced for which the modulus can be easily calculated. As a consequence, very good estimates for the best approximation are obtained. Estimating or calculating the modulus directly can be a very intricate problem.

11.
Approximation properties of multivariate wavelets
Wavelets are generated from refinable functions by using multiresolution analysis. In this paper we investigate the approximation properties of multivariate refinable functions. We give a characterization for the approximation order provided by a refinable function in terms of the order of the sum rules satisfied by the refinement mask. We connect the approximation properties of a refinable function with the spectral properties of the corresponding subdivision and transition operators. Finally, we demonstrate that a refinable function in provides approximation order .



12.
The trial-and-error process of calculating the characteristics of an air vessel suitable to protect a rising main against the effects of hydraulic transients has proved cumbersome for the design engineer. The engineer's own experience, together with the sets of charts found in the literature, can provide some help. The aim of this paper is to present a neural network allowing instantaneous and direct calculation of air and vessel volumes from the system parameters. This neural network has been implemented in the hydraulic transient simulation package DYAGATS.

13.
Using ω^{2r}_{φ^λ}(f, t) in place of ω^{r}_{φ^λ}(f, t), we give estimates for simultaneous approximation by linear combinations of the Szász–Kantorovich operators.

14.
It is known that if f ∈ W_p^k, then ω_m(f, t)_p ≲ t ω_{m−1}(f′, t)_p ≲ ⋯. Its inverse, with any constants independent of f, is not true in general. Hu and Yu proved that the inverse holds true for splines S with equally spaced knots, thus ω_m(S, t)_p ∼ t ω_{m−1}(S′, t)_p ∼ t² ω_{m−2}(S″, t)_p ∼ ⋯. In this paper, we extend their results to splines with any given knot sequence, and further to principal shift-invariant spaces and wavelets under certain conditions. Applications are given at the end of the paper.

15.
In this paper, a family of interpolation neural network operators are introduced. Here, ramp functions as well as sigmoidal functions generated by central B-splines are considered as activation functions. The interpolation properties of these operators are proved, together with a uniform approximation theorem with order, for continuous functions defined on bounded intervals. The relations with the theory of neural networks and with the theory of the generalized sampling operators are discussed.
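A minimal sketch of an interpolation operator of this kind, assuming the hat function r(x+1) − 2r(x) + r(x−1) built from three ramp neurons; the names and the uniform nodes k/n are illustrative, not the paper's exact operator:

```python
import numpy as np

def ramp(x):
    # Ramp activation: max(x, 0).
    return np.maximum(x, 0.0)

def hat(x):
    # Hat function from three ramp neurons: it interpolates at the integers
    # (hat(0) = 1, hat(k) = 0 for k != 0) and its integer shifts form a
    # partition of unity.
    return ramp(x + 1.0) - 2.0 * ramp(x) + ramp(x - 1.0)

def interp_operator(f, n, x):
    # Interpolation neural network operator on [0, 1]:
    # F_n(f)(x) = sum_k f(k/n) * hat(n*x - k), exact at the nodes k/n.
    k = np.arange(0, n + 1)
    return np.sum(f(k / n) * hat(np.asarray(x) * n - k))
```

Because the hat kernel vanishes at all nonzero integers, the operator reproduces f exactly at the nodes (the interpolation property the abstract proves), while between nodes it reduces to piecewise-linear approximation.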

16.
In order to obtain much faster convergence, Müller introduced the left Gamma quasi-interpolants and obtained an approximation equivalence theorem in terms of ω_φ^{2r}(f, t)_p. Guo extended Müller's results to ω_φ^{2r}(f, t)_∞. In this paper we improve the previous results and give a weighted approximation equivalence theorem.

17.
Peetre K-moduli and best approximation on the sphere
Berens H, 李落清. 《数学学报》 (Acta Mathematica Sinica), 1995, 38(5): 589–599.
This paper studies the relations between three Peetre K-moduli and best approximation on the sphere, and establishes several strong- and weak-type inequalities between them. In addition, the equivalence between the K-moduli and the moduli of smoothness is discussed.

18.
We study the approximation of bivariate functions by a combination-type trigonometric interpolation polynomial operator. Using the modulus of continuity as a tool, approximation theorems for these trigonometric interpolation polynomials in Orlicz spaces are given.

19.
In this paper the best polynomial approximation in terms of the system of Faber–Schauder functions in the space C_p[0, 1] is studied. The constant in the estimate of Jackson's inequality for the best approximation in the metric of C_p[0, 1] and the estimate of the modulus of continuity ω_{1−1/p} are refined. Translated from Matematicheskie Zametki, Vol. 62, No. 3, pp. 363–371, September 1997. Translated by N. K. Kulman.

20.
Let s ≥ 1 be an integer and W be the class of all functions having integrable partial derivatives on [0, 1]^s. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned ε > 0 to each function in W. We prove that this number cannot be if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any ε > 0, a network with neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other L^p norms. The research of this author was supported by NSF Grant #DMS 92-0698. The research of this author was supported, in part, by AFOSR Grant #F49620-93-1-0150 and by NSF Grant #DMS 9404513.
