Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, we investigate the relation between the rate of convergence for the derivatives of the combinations of Baskakov operators and the smoothness of the derivatives of the approximated functions. We give some direct and inverse results on pointwise simultaneous approximation by the combinations of Baskakov operators. We also give a new equivalent result on pointwise approximation by these operators.
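The classical (uncombined) Baskakov operator underlying these combinations can be sketched numerically. The series truncation length and the test point below are illustrative assumptions, not part of the paper:

```python
from math import comb

def baskakov(f, n, x, terms=400):
    """Classical Baskakov operator
    V_n(f)(x) = sum_{k>=0} C(n+k-1, k) * x^k / (1+x)^(n+k) * f(k/n).
    The infinite series is truncated at `terms` (an assumption for this sketch)."""
    return sum(
        comb(n + k - 1, k) * x**k / (1 + x) ** (n + k) * f(k / n)
        for k in range(terms)
    )

# Baskakov operators reproduce linear functions, so V_n(t -> t)(x) ~ x
# up to the truncation error.
approx = baskakov(lambda t: t, 50, 0.5)
```

The weights form a negative binomial distribution in k, so the operator also reproduces constants, which gives a quick sanity check on the truncation.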

2.
In this paper, a family of interpolation neural network operators is introduced. Here, ramp functions as well as sigmoidal functions generated by central B-splines are considered as activation functions. The interpolation properties of these operators are proved, together with a uniform approximation theorem with order, for continuous functions defined on bounded intervals. The relations with the theory of neural networks and with the theory of generalized sampling operators are discussed.
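As a concrete illustration of the interpolation property, the central B-spline of order 2 (the hat function) vanishes at all nonzero integers, so the resulting operator reproduces f exactly at the nodes k/n. This sketch is an illustrative assumption, not the paper's exact construction:

```python
def hat(x):
    """Central B-spline of order 2 (hat function), supported on [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

def interp_operator(f, n, x):
    """Quasi-interpolation operator F_n(f)(x) = sum_{k=0}^{n} f(k/n) * hat(n*x - k)
    on [0, 1]. Since hat(j - k) = 1 iff j == k for integers j, k, the operator
    interpolates f at the nodes k/n (it is the piecewise-linear interpolant)."""
    return sum(f(k / n) * hat(n * x - k) for k in range(n + 1))

# At a node x = 5/10, the operator returns f(x) exactly.
value = interp_operator(lambda t: t * t, 10, 0.5)
```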

3.
We introduce a new sequence of linear positive operators by combining the Brenke polynomials with the Srivastava-Gupta-type operators. We obtain the moments of the operators and present some classical and statistical approximation properties by means of Korovkin-type results. Next, we establish a Voronovskaya-type asymptotic formula, local approximation results, and error estimates in terms of the weighted modulus of continuity and for functions in a Lipschitz-type space. Lastly, we estimate the rate of approximation for functions whose derivatives are of bounded variation.

4.
In this paper, we introduce a new type of neural network formed by superpositions of a sigmoidal function and study its approximation capability. We investigate the quantitative constructive approximation of real continuous multivariate functions on a cube by such neural networks. The approximation is derived by establishing multivariate Jackson-type inequalities involving the multivariate modulus of smoothness of the target function. Our networks require no training in the traditional sense.

5.
虞旦盛, 周平. 《数学学报》 (Acta Mathematica Sinica), 2016, 59(5): 623-638
First, we introduce a class of neural network operators activated by ramp functions, establish direct and inverse theorems for their approximation of continuous functions, and give their essential order of approximation. Next, we introduce linear combinations of these neural network operators to improve the order of approximation, and study the simultaneous approximation problem for these combinations. Finally, using Steklov functions, we construct a new neural network operator and establish direct and inverse theorems for its approximation in the space L^p[a,b].

6.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate some approximation errors of neural network operators with the hyperbolic tangent activation function. Firstly, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation-type neural network operators are constructed to approximate univariate and bivariate functions, respectively. Also, the errors of the approximation are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are estimated.
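The partition-of-unity identity for the hyperbolic tangent can be checked numerically. A common construction in the neural network approximation literature takes ψ(x) = (tanh(x+1) − tanh(x−1))/4, whose integer translates telescope to 1; the truncation bounds and test points below are illustrative assumptions:

```python
import math

def psi(x):
    """Density kernel built from tanh: psi(x) = (tanh(x+1) - tanh(x-1)) / 4.
    The sum of psi(x - k) over all integers k telescopes to 1."""
    return (math.tanh(x + 1) - math.tanh(x - 1)) / 4.0

def nn_operator(f, n, x, trunc=50):
    """Quasi-interpolation neural network operator
    G_n(f)(x) = sum_k f(k/n) * psi(n*x - k), truncated around k0 = round(n*x)."""
    k0 = round(n * x)
    return sum(f(k / n) * psi(n * x - k) for k in range(k0 - trunc, k0 + trunc + 1))

# Partition of unity: the truncated series is 1 up to an exponentially small tail.
total = sum(psi(0.37 - k) for k in range(-50, 51))
```

Because psi is even, the operator also reproduces linear functions exactly at points where n*x is an integer, which gives a second numerical check.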

7.
In this paper we introduce new approximation operators for univariate set-valued functions with general compact images in R^n. We adapt linear approximation methods for real-valued functions by replacing linear combinations of numbers with new metric linear combinations of finite sequences of compact sets, thus obtaining "metric analogues" of these operators for set-valued functions. The new metric linear combination extends the binary metric average of Artstein to several sets and admits any real coefficients. Approximation estimates for the metric analogue operators are derived. As examples we study metric Bernstein operators, metric Schoenberg operators, and metric polynomial interpolants.

8.
In the past decades, various neural network models have been developed for modeling the behavior of the human brain or performing problem-solving by simulating it. Recurrent neural networks are the type of neural networks used to model or simulate the associative memory behavior of human beings. A recurrent neural network (RNN) can generally be formalized as a dynamic system associated with two fundamental operators: one is the nonlinear activation operator deduced from the input-output properties of the involved neurons, and the other is the synaptic connections (a matrix) among the neurons. Through carefully examining the properties of the various activation functions used, we introduce a novel type of monotone operators, the uniformly pseudo-projection-anti-monotone (UPPAM) operators, to unify the various RNN models that have appeared in the literature. We develop a unified encoding and stability theory for the UPPAM network model when time is discrete. The established model and theory not only unify but also jointly generalize the best-known results on RNNs. The approach is a visible step towards establishing a unified mathematical theory of recurrent neural networks.
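A minimal discrete-time instance of this formalization uses tanh as the (monotone, 1-Lipschitz) activation operator and a synaptic matrix W; when W is small enough, the update map is a contraction and the iterates converge to a unique fixed point. The particular W and b below are illustrative assumptions:

```python
import math

def step(W, b, x):
    """One discrete-time RNN update: x' = tanh(W @ x + b).
    tanh plays the role of the nonlinear activation operator,
    W the synaptic connection matrix."""
    n = len(x)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + b[i]) for i in range(n)]

# With max row sum of |W| = 0.35 < 1, the map is a contraction,
# so iterates converge to a unique fixed point (a stored memory state).
W = [[0.2, -0.1], [0.05, 0.3]]
b = [0.5, -0.4]
x = [0.0, 0.0]
for _ in range(200):
    x = step(W, b, x)
x_next = step(W, b, x)
```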

9.
In this paper, some equivalence theorems on simultaneous approximation for combinations of Gamma operators in terms of the weighted moduli of smoothness ω^r_{φ^λ}(f, t)_w (0 ≤ λ ≤ 1) are given. The relation between derivatives of combinations of Gamma operators and the smoothness of derivatives of functions is also investigated.

10.
In this paper we introduce the Favard operators of max-product type to study the uniform approximation of functions with exponential growth. We introduce a new suitable modulus of continuity and estimate the rate of approximation in terms of this modulus. We obtain a better rate of approximation than that of the corresponding positive linear operators.
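The max-product construction itself, in which sums are replaced by maxima, can be illustrated on the Bernstein basis rather than the Favard basis treated in the paper. This max-product Bernstein sketch, stated for nonnegative f, is a stand-in for the general idea, not the paper's operator:

```python
from math import comb

def max_product_bernstein(f, n, x):
    """Max-product Bernstein operator for f >= 0:
    B^(M)_n(f)(x) = max_k [p_{n,k}(x) * f(k/n)] / max_k p_{n,k}(x),
    where p_{n,k}(x) = C(n,k) x^k (1-x)^(n-k). The linear sums of the classical
    operator are replaced by maxima, giving a nonlinear, positively homogeneous operator."""
    weights = [comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1)]
    num = max(w * f(k / n) for k, w in enumerate(weights))
    den = max(weights)
    return num / den
```

Like its linear counterpart, the operator reproduces constants and its value always lies between the minimum and maximum of f over the nodes.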

11.
This paper establishes pointwise approximation theorems for linear combinations of Szász-Mirakjan operators. It also studies the relation between higher-order derivatives of Szász-Mirakjan operators and the smoothness of the approximated functions.
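The basic (uncombined) Szász-Mirakjan operator can be sketched directly; the series truncation and test points below are illustrative assumptions:

```python
import math

def szasz_mirakjan(f, n, x, terms=200):
    """Szász-Mirakjan operator
    S_n(f)(x) = e^{-nx} * sum_{k>=0} (nx)^k / k! * f(k/n),
    truncated at `terms` (an assumption for this sketch).
    The weights are Poisson probabilities with mean nx."""
    nx = n * x
    total, coeff = 0.0, math.exp(-nx)  # coeff holds e^{-nx} (nx)^k / k!
    for k in range(terms):
        total += coeff * f(k / n)
        coeff *= nx / (k + 1)
    return total
```

Since the Poisson weights sum to 1 and have mean nx, the operator reproduces constants and linear functions, which checks the truncation.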

12.
Recently, Li [16] introduced three kinds of single-hidden-layer feed-forward neural networks (FNNs) with optimized piecewise-linear activation functions and fixed weights, and obtained upper and lower bound estimates on the approximation accuracy of the FNNs for continuous functions defined on bounded intervals. In the present paper, we point out that there are errors both in the definitions of the FNNs and in the proof of the upper estimates in [16]. Using new methods, we also give correct approximation rate estimates for Li's neural networks.

13.
In this paper we study the mixed summation-integral type operators having Szász and Beta basis functions. We extend the study of Gupta and Noor [V. Gupta, M.A. Noor, Convergence of derivatives for certain mixed Szász-Beta operators, J. Math. Anal. Appl. 321 (1) (2006) 1-9] and obtain some direct results in local approximation without and with iterative combinations. In the last section, direct global approximation theorems are established.

14.
We obtain the complete asymptotic expansion of the image functions of Müller’s Gamma operators and of their derivatives. All expansion coefficients are explicitly calculated. Moreover, we study linear combinations of Gamma operators having a better degree of approximation than the operators themselves. Using divided differences we define general classes of linear combinations of which special cases were recently introduced and investigated by other authors.

15.
In this study, we introduce the Durrmeyer-type Jakimoski–Leviatan operators and examine their approximation properties. We study the local approximation properties of these operators. Further, we investigate the convergence of these operators in a weighted space of functions and obtain the corresponding approximation properties. Furthermore, we give a Voronovskaja-type theorem for our new operators. Copyright © 2015 John Wiley & Sons, Ltd.

16.
In this paper, weighted Ditzian-Totik moduli of smoothness are used to prove estimates on the order of weighted approximation by linear combinations of Bernstein-type operators, together with an equivalence theorem. We also study the relation between higher-order derivatives of Bernstein-type operators under Jacobi weights and the smoothness of the approximated functions.

17.
In this paper, the technique of approximate partition of unity is used to construct a class of neural network operators with sigmoidal functions. Using the modulus of continuity of the function as a metric, ...

18.
In the present paper we introduce a generalization of positive linear operators and obtain its Korovkin-type approximation properties. The rates of convergence of this generalization are also obtained by means of the modulus of continuity and Lipschitz-type maximal functions. The second purpose of this paper is to obtain weighted approximation properties for the generalization of positive linear operators defined in this paper. We also obtain a differential equation of which the second moment of our operators is a particular solution. Lastly, some Voronovskaja-type asymptotic formulas are obtained for Meyer-König and Zeller type and Bleimann, Butzer and Hahn type operators.

19.
The aim of this paper is to investigate two classes of neural network approximation operators with logarithmic sigmoidal functions, built from two kinds of network weights, together with a class of quasi-interpolation operators. Using these operators as approximation tools, upper bounds on the estimation errors for approximating continuous functions are established.

20.
Simultaneous approximation error of Bernstein-type operators
This paper proves that functions in C[0,1] and their derivatives can be simultaneously approximated by linear combinations of Bernstein operators, and obtains direct and inverse theorems for this approximation. It also establishes an equivalence between derivatives of Bernstein operators and the smoothness of the function. The results obtained connect the global and the classical pointwise results on simultaneous approximation by Bernstein operators.
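The Bernstein operator and the exact finite-difference formula for its first derivative, which is the starting point for simultaneous approximation, can be sketched directly. The values of n and the test points below are illustrative assumptions:

```python
from math import comb

def bernstein(f, n, x):
    """Bernstein operator B_n(f)(x) = sum_{k=0}^{n} C(n,k) x^k (1-x)^(n-k) f(k/n)."""
    return sum(
        comb(n, k) * x**k * (1 - x) ** (n - k) * f(k / n) for k in range(n + 1)
    )

def bernstein_derivative(f, n, x):
    """Exact first derivative of B_n(f):
    (B_n f)'(x) = n * sum_{k=0}^{n-1} [f((k+1)/n) - f(k/n)] * p_{n-1,k}(x).
    As n grows, it converges to f' (simultaneous approximation)."""
    return n * sum(
        (f((k + 1) / n) - f(k / n)) * comb(n - 1, k) * x**k * (1 - x) ** (n - 1 - k)
        for k in range(n)
    )
```

B_n reproduces linear functions, and for f(t) = t^2 one can compute (B_n f)'(1/2) = 1 = f'(1/2) exactly, which makes both formulas easy to check numerically.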

