Similar Documents
20 similar documents found (search time: 171 ms)
1.
For an L-integrable function $\varphi$ on $[0,1]$ and $\alpha>0$, the author defined in [4] the B-D-B operators $M_{n\alpha}(\varphi,x)$, built from the Bézier basis functions $f_{nk}(x)$ (the defining formula is given in [4]). Paper [4] studied the approximation properties of the B-D-B operators in the space $C[0,1]$; the present paper continues that work and is devoted to the approximation properties of these operators in $L_p[0,1]$ ($1\le p<+\infty$). We prove that for $\alpha>0$ the operators $M_{n\alpha}(\varphi,x)$ approximate uniformly in $L_p[0,1]$, and for $\alpha\ge1$ we obtain quantitative estimates of the degree of approximation in $L_p[0,1]$ and $L_p^1[0,1]$.

2.
Let $p\ge1$ and let $A$, $B$ be positive operators on a Hilbert space. T. Furuta proved that if $A\ge B>0$, then for any $t\in[0,1]$,
$$G(r,s)=A^{-r/2}\left\{A^{r/2}\left(A^{-t/2}B^{p}A^{-t/2}\right)^{s}A^{r/2}\right\}^{\frac{1-t+r}{(p-t)s+r}}A^{-r/2}$$
is monotone decreasing in $r$ and $s$ for $r\ge t$ and $s\ge1$. We show that this result can be extended to the case of several operators.
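Since the inequality is stated in the Loewner order, a finite-dimensional sanity check is possible. The sketch below, with my own helper names (`spd_power`, `loewner_geq`) and arbitrary example values $p=2$, $t=1/2$, verifies the claimed decrease on a random pair $A\ge B>0$; it illustrates the statement, not the proof.

```python
# Numerical illustration of the monotonicity of G(r, s); not the paper's proof.
import numpy as np

def spd_power(X, a):
    """X^a for a symmetric positive definite matrix X, via eigendecomposition."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * w**a) @ V.T

def G(A, B, p, t, r, s):
    """A^{-r/2} { A^{r/2} (A^{-t/2} B^p A^{-t/2})^s A^{r/2} }^{(1-t+r)/((p-t)s+r)} A^{-r/2}."""
    inner = spd_power(spd_power(A, -t / 2) @ spd_power(B, p) @ spd_power(A, -t / 2), s)
    mid = spd_power(spd_power(A, r / 2) @ inner @ spd_power(A, r / 2),
                    (1 - t + r) / ((p - t) * s + r))
    return spd_power(A, -r / 2) @ mid @ spd_power(A, -r / 2)

def loewner_geq(X, Y, tol=1e-8):
    """X >= Y in the positive semidefinite (Loewner) order."""
    D = (X - Y + (X - Y).T) / 2
    return np.linalg.eigvalsh(D)[0] >= -tol

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
B = M @ M.T + 0.1 * np.eye(4)   # B > 0
A = B + np.eye(4)               # A >= B > 0
p, t = 2.0, 0.5
# G(r, s) should decrease as r (>= t) and s (>= 1) grow:
assert loewner_geq(G(A, B, p, t, 1.0, 1.0), G(A, B, p, t, 2.0, 1.0))
assert loewner_geq(G(A, B, p, t, 1.0, 1.0), G(A, B, p, t, 1.0, 2.0))
print("monotone decrease verified on this example")
```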

3.
Using probabilistic tools, we obtain estimates of the degree of approximation by BBH operators for functions of bounded $p$-th power variation ($p\ge1$) on any finite subinterval of $[0,\infty)$, and we discuss the approximation problem and an asymptotic formula for the case where the derivative is a function of bounded $p$-th power variation.

4.
We consider Jackson-type estimates for the approximation of functions in $L^p[-1,1]$ ($1\le p\le\infty$) by the Kantorovich-Vertesi rational interpolation operators $L^*_{n,s}(f,X,x)$, and obtain the following order of approximation: $\|L^*_{n,s}(f,X,x)-f(x)\|_{L^p[-1,1]}\le C_{p,s}\,\omega(f,1/n^2)_{L^p[-1,1]}$ for $s>2$.
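The right-hand side of this Jackson estimate is the $L^p$ modulus of continuity $\omega(f,\delta)_{L^p[-1,1]}=\sup_{0<h\le\delta}\|f(\cdot+h)-f(\cdot)\|_{L^p[-1,1]}$. A rough numerical evaluation of this quantity (my own discretization choices throughout; grid size and step sampling are illustrative) might look as follows:

```python
# Rough numerical evaluation of the L^p modulus of continuity on [-1, 1].
import numpy as np

def modulus_Lp(f, delta, p=2, m=4001):
    x = np.linspace(-1.0, 1.0, m)
    dx = x[1] - x[0]
    best = 0.0
    for h in np.linspace(0.0, delta, 41)[1:]:
        ok = x + h <= 1.0                      # keep both x and x + h inside [-1, 1]
        diff = np.abs(f(x[ok] + h) - f(x[ok]))
        best = max(best, (np.sum(diff**p) * dx) ** (1.0 / p))
    return best

f = np.abs                                     # f(x) = |x| is Lipschitz, so omega ~ delta
for n in (8, 32, 128):
    print(n, modulus_Lp(f, 1.0 / n))           # decays roughly like 1/n
```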

5.
In Banach spaces with the bounded approximation property, we make a comprehensive study of the relationships among sequences of $p$-th order ($p\ge1$) quasi-collectively compact operators, generalized collectively compact operators, and (M)-class quasi-collectively compact operators.

6.
Kernel-function spaces are understood from the viewpoint of differential operators, and the kernel approximation problem is studied by means of the classical Fourier transform. A modulus of smoothness is defined via Fourier multiplier operators and operator semigroups, and is shown to be equivalent to a K-functional based on the differential operator; this yields a Jackson inequality characterizing the convergence of kernel approximation. It is further proved that if the differential operator is a Riesz potential or a Bessel potential operator, the convergence of the approximation can be reduced to approximation by convolution operators. In particular, an upper bound estimate is given for approximation in reproducing kernel Hilbert spaces.

7.
Li Hao (李浩), Acta Mathematica Sinica (数学学报), 1985, 28(2): 244-248
Throughout, $H$ denotes a complex Hilbert space, $\langle\cdot,\cdot\rangle$ the inner product of pairs of elements of $H$, and $(H,H)$ the Banach space of bounded linear operators on $H$. If $P\in(H,H)$ and $\langle Px,x\rangle\ge0$ for all $x\in H$, then $P$ is called a nonnegative operator, written $P\ge0$. For any $A\in(H,H)$, define $\delta(A)=\inf\{\|A-P\| : P\ge0,\ P\in(H,H)\}$; if $P_0\in(H,H)$, $P_0\ge0$, and $\|A-P_0\|=\delta(A)$, then $P_0$ is called a nonnegative approximation of $A$. The nonnegative approximation problem was first posed and studied in [1]. Notation not explained here is the same as in [2].
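For self-adjoint $A$ in finite dimensions, $\delta(A)$ has a simple closed form, $\delta(A)=\max(0,-\lambda_{\min}(A))$, attained for instance by $P_0=A+\delta(A)I$; the sketch below checks this special case numerically (the general non-self-adjoint problem studied in [1] is harder and is not attempted here):

```python
# Finite-dimensional check of delta(A) for the Hermitian (self-adjoint) case only.
import numpy as np

def nonneg_approx_selfadjoint(A):
    """Return (delta, P0) for a Hermitian matrix A, with P0 = A + delta * I."""
    lam_min = np.linalg.eigvalsh(A)[0]
    delta = max(0.0, -lam_min)
    P0 = A + delta * np.eye(A.shape[0])      # P0 >= 0 and ||A - P0|| = delta
    return delta, P0

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5))
A = (X + X.T) / 2                            # a real symmetric test matrix
delta, P0 = nonneg_approx_selfadjoint(A)
assert np.linalg.eigvalsh(P0)[0] >= -1e-12   # P0 is nonnegative
assert abs(np.linalg.norm(A - P0, 2) - delta) < 1e-12  # distance is attained
print("delta(A) =", delta)
```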

8.
We discuss a class of neural network operators
$$f_n(x)=\sum_{k=-n^2}^{n^2}\frac{f(k/n)}{n^{\alpha}}\,b\!\left(n^{1-\alpha}\left(x-\frac{k}{n}\right)\right)$$
and give upper bounds for the approximation error $|f_n(x)-f(x)|$. Jackson-type estimates for the approximation by this network operator are obtained in the two cases where $f(x)$ is continuous and where $f(x)$ is $N$ times continuously differentiable.
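A minimal sketch of this operator, under my own illustrative choices (a standard Gaussian density for the bell function $b$, so that $\int b=1$, and $\alpha=1/2$; the paper's $b$ and normalization may differ):

```python
# Evaluate f_n(x) = sum_k f(k/n) / n^alpha * b(n^(1-alpha) (x - k/n)).
import numpy as np

def f_n(f, x, n, alpha=0.5,
        b=lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)):  # assumed bell function
    k = np.arange(-n**2, n**2 + 1)
    return np.sum(f(k / n) / n**alpha * b(n**(1 - alpha) * (x - k / n)))

f, x = np.sin, 0.7
for n in (4, 16, 64):
    print(n, abs(f_n(f, x, n) - f(x)))   # the error should shrink as n grows
```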

9.
In this paper we introduce and study, in Banach spaces, a class of operators which we call $p$-th order ($p\ge1$) quasi-collectively compact operators; a first brief account of this class was given in our report [1]. The class of quasi-collectively compact operators can be regarded as the result of a special perturbation of the class of collectively compact operators introduced and studied by Anselone in 1971. Applying the theory of quasi-collectively compact operators to linear transport problems, one can establish a unified theoretical foundation for certain approximate methods of solving the integro-differential Boltzmann equation, such as the discrete-ordinates methods. The applications of the approximation theory of quasi-collectively compact operators presented here, combined with our earlier work, give a systematic qualitative account of the various approximations arising in the higher-dimensional discrete-ordinates methods of linear transport theory, including spectral approximation, and thereby answer the questions about discrete-ordinates methods raised at the Fourth International Conference on Transport Theory.

10.
Pointwise degree of approximation of functions of bounded variation by certain positive linear operators
1. Introduction. R. Bojanic [1] studied the pointwise degree of approximation of functions of bounded variation by Fourier operators, and in 1983 Cheng Fuhua, in his doctoral dissertation, studied the pointwise degree of approximation of BV functions by Bernstein operators. In this paper we give the pointwise degree of approximation of functions of bounded variation by general positive linear operators. As examples, we give the pointwise degrees of approximation of functions of bounded variation by the Bernstein and Kantorovich operators. It should be pointed out that reference [2]…

11.
Deep neural networks with rectified linear units (ReLU) have recently become more and more popular. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations of smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For general smooth functions, we first project the function onto its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms for tensor-product grid and sparse grid discretizations to get polynomial approximations. Our constructive algorithms show clearly a close connection between spectral methods and deep neural networks: PowerNets with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
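The exact-representation property is easy to see in the simplest case. With the rectified power unit $\sigma_s(x)=\max(0,x)^s$ and $s=2$, a single hidden layer already reproduces $x^2$ exactly via $x^2=\sigma_2(x)+\sigma_2(-x)$, and products follow from the polarization identity. A toy check (my own illustration, not the paper's PowerNet construction):

```python
# RePU units with s = 2 represent x^2 and x*y exactly, hence all polynomials.
import numpy as np

def repu(x, s=2):
    return np.maximum(0.0, x) ** s

x = np.linspace(-3, 3, 101)
assert np.allclose(repu(x) + repu(-x), x**2)   # exact, not approximate

# Products via the polarization identity 4xy = (x+y)^2 - (x-y)^2:
y = np.linspace(-1, 5, 101)
xy = (repu(x + y) + repu(-x - y) - repu(x - y) - repu(-x + y)) / 4.0
assert np.allclose(xy, x * y)
print("x^2 and x*y represented exactly by RePU units with s = 2")
```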

12.
We obtain a sharp lower bound estimate for the error of approximation of a continuous function by single hidden layer neural networks with a continuous activation function and weights varying along two fixed directions. We show that for a certain class of activation functions this lower bound estimate turns into an equality. The obtained result provides us with a method for direct computation of the approximation error. As an application, we give a formula which can be used to compute instantly the approximation error for a class of functions having second order partial derivatives.

13.
Recently, Li [16] introduced three kinds of single-hidden-layer feed-forward neural networks (FNNs) with optimized piecewise linear activation functions and fixed weights, and obtained upper and lower bound estimates on the approximation accuracy of the FNNs for continuous functions defined on bounded intervals. In the present paper, we point out that there are errors both in the definitions of the FNNs and in the proof of the upper estimates in [16]. Using new methods, we also give correct approximation rate estimates for approximation by Li's neural networks.

14.
The relationship between the rate of approximation of a monotone function by step functions (with an increasing number of values) and the Hausdorff dimension of the corresponding Lebesgue–Stieltjes measure is studied. An upper bound on the dimension is found in terms of the approximation rate, and it is shown that a lower bound cannot be constructed in these terms.

15.
We establish an upper bound for the error of the best approximation of the first order differentiation operator by linear bounded operators on the set of twice differentiable functions in the space $L_2$ on the half-line. This upper bound is close to a known lower bound and improves the previously known upper bound due to E. E. Berdysheva. We use a specific operator that is introduced and studied in the paper.

16.
The spherical approximation between two nested reproducing kernel Hilbert spaces generated from different smooth kernels is investigated. It is shown that the functions of one space can be approximated by those of the subspace with better smoothness. Furthermore, an upper bound on the approximation error is given.

17.
In this study, methods for computing the exact bounds and the confidence bounds of the dynamic response of structures subjected to uncertain-but-bounded excitations are discussed. Here the Euclidean norm of the nodal displacement is taken as the measure of the structural response. The problems of calculating the exact lower bound, the confidence (outer) approximation and the inner approximation of the exact upper bound, and the exact upper bound of the dynamic response are modeled as three convex QB (quadratic programming with box constraints) problems and a problem of quadratic programming with bivalent constraints at each time point, respectively. Accordingly, the DCA (difference of convex functions algorithm) and the vertex method are adopted to solve the convex QB problems and the quadratic programming problem with bivalent constraints, respectively. Based on the inner and outer approximations of the exact upper bound, the error between the confidence upper bound and the exact upper bound of the dynamic response can be obtained. In particular, we also investigate how to obtain the confidence bound of the dynamic response of structures subjected to harmonic excitations with uncertain-but-bounded excitation frequencies. Four examples are given to show the efficiency and accuracy of the proposed method.

18.
An optimal algorithm for approximating bandlimited functions from localized sampling is established. Several equivalent formulations for the approximation error of the optimal algorithm are presented and its upper and lower bound estimates for the univariate case are provided. The estimates show that the approximation error decays exponentially (but not faster) as the number of localized samplings increases. As a consequence of these results, we obtain an upper bound estimate for the eigenvalues of an integral operator that arises in the bandwidth problem.

19.
We give an algorithm which computes the approximation order of spaces of periodic piecewise polynomial functions, given the degree, the smoothness, and the tessellation. The algorithm consists of two steps: the first gives an upper bound and the second a lower bound on the approximation order. In all known cases the two bounds coincide.

20.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate the approximation errors of neural network operators with the hyperbolic tangent activation function. Firstly, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation-type neural network operators are constructed to approximate univariate and bivariate functions, respectively. Also, the errors of the approximation are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are estimated.
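One partition of unity of the kind mentioned uses $\Phi(x)=\tfrac14\bigl(\tanh(x+1)-\tanh(x-1)\bigr)$, whose integer translates satisfy $\sum_{k\in\mathbb{Z}}\Phi(x-k)=1$ because the series telescopes; a numerical check (my normalization, which may differ from the paper's):

```python
# Check that the integer translates of Phi sum to 1 (truncated telescoping sum).
import numpy as np

def Phi(x):
    return (np.tanh(x + 1) - np.tanh(x - 1)) / 4.0

x = np.linspace(-0.5, 0.5, 11)
k = np.arange(-200, 201)                  # truncation; the tails decay exponentially
s = Phi(x[:, None] - k[None, :]).sum(axis=1)
assert np.allclose(s, 1.0, atol=1e-12)
print("sum_k Phi(x - k) = 1 (up to truncation error)")
```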
