Similar articles
 10 similar articles found; search took 15 ms
1.
2.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we construct deep neural networks with three hidden layers, using a sigmoidal activation function, to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets, with controllable magnitudes of the free parameters, reach the optimal approximation rate for both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures the approximation rate stops improving for functions of very high smoothness.
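The paper's construction is a specific three-hidden-layer architecture. As a loose numerical illustration of sigmoidal approximation (not the authors' construction), one can fit only the output weights of a single random sigmoidal layer by least squares and watch the uniform error fall as the width grows; all weight scales below are arbitrary demonstration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Logistic sigmoidal activation.
    return 1.0 / (1.0 + np.exp(-t))

# Smooth target function on [0, 1].
f = lambda x: np.sin(2 * np.pi * x)
x = np.linspace(0.0, 1.0, 400)

def uniform_error(width):
    # One hidden layer of random sigmoidal features; only the output
    # weights are fitted, by linear least squares.
    w = rng.normal(scale=10.0, size=width)
    b = rng.uniform(-10.0, 10.0, size=width)
    A = sigmoid(np.outer(x, w) + b)          # 400 x width design matrix
    coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return np.max(np.abs(A @ coef - f(x)))

errs = [uniform_error(m) for m in (8, 32, 128)]
print(errs)                                  # uniform error shrinks with width
assert errs[-1] < errs[0]
```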

3.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate approximation errors of neural network operators with the hyperbolic tangent activation function. Firstly, an equation giving a partition of unity for the hyperbolic tangent function is established. Then, two kinds of quasi-interpolation-type neural network operators are constructed to approximate univariate and bivariate functions, respectively. The approximation errors are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are also estimated.
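A minimal sketch of the ingredients, assuming the standard tanh-based density ψ(x) = ¼(tanh(x+1) − tanh(x−1)), whose integer translates telescope to 1 (the paper's operators may differ in detail); the univariate quasi-interpolation operator is then F_n f(x) = Σ_k f(k/n) ψ(nx − k):

```python
import numpy as np

def psi(x):
    # tanh-based density: the translates psi(x - k), k in Z, telescope to 1.
    return 0.25 * (np.tanh(x + 1.0) - np.tanh(x - 1.0))

# Numerical check of the partition of unity on [-1/2, 1/2].
x = np.linspace(-0.5, 0.5, 7)
k = np.arange(-40, 41)
pou = psi(x[:, None] - k[None, :]).sum(axis=1)
print(pou)                                   # every entry is ~ 1

# Quasi-interpolation operator  F_n f(x) = sum_k f(k/n) psi(nx - k).
def F(n, f, x):
    k = np.arange(-2 * n, 2 * n + 1)
    return psi(n * x[:, None] - k[None, :]) @ f(k / n)

f = np.cos
xs = np.linspace(-1.0, 1.0, 201)
errs = [np.max(np.abs(F(n, f, xs) - f(xs))) for n in (10, 40, 160)]
print(errs)                                  # error decreases as n grows
assert errs[-1] < errs[0]
```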

4.
Linear and nonlinear approximations in a wavelet basis are considered for functions from Besov spaces B^σ_{p,q}([0, 1]), σ > 0, 1 ≤ p, q ≤ ∞. It is shown that an optimal linear approximation by a D-dimensional subspace of basis wavelet functions has an error of order D^{−min(σ, σ + 1/2 − 1/p)} for all 1 ≤ p ≤ ∞ and σ > max(1/p − 1/2, 0). An original scheme is proposed for optimal nonlinear approximation. It is shown how a D-dimensional subspace of basis wavelet functions is to be chosen, depending on the approximated function, so that the error is of order D^{−σ} for all 1 ≤ p ≤ ∞ and σ > max(1/p − 1/2, 0). The proposed nonlinear approximation scheme does not require any a priori information on the approximated function.
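The linear-versus-nonlinear gap can be illustrated with a discrete orthonormal Haar transform (a simple stand-in, not the paper's scheme): since the transform is orthonormal, the L2 error of any D-term approximation equals the norm of the discarded coefficients, so we can compare keeping the first D coefficients (a fixed linear subspace) against keeping the D largest (an adaptive, nonlinear choice) for a function with a jump:

```python
import numpy as np

def haar_coeffs(v):
    # Orthonormal discrete Haar transform of a length-2^J vector,
    # returned with coarse-scale coefficients first.
    v = np.asarray(v, dtype=float)
    pieces = []
    while v.size > 1:
        pieces.append((v[0::2] - v[1::2]) / np.sqrt(2.0))
        v = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    pieces.append(v)
    return np.concatenate(pieces[::-1])

N = 1024
x = (np.arange(N) + 0.5) / N
f = np.where(x < 1 / 3, np.sin(np.pi * x), np.cos(np.pi * x))  # jump at 1/3
c = haar_coeffs(f)

D = 32
# Linear: fixed D-dimensional coarse subspace (first D coefficients).
err_linear = np.linalg.norm(c[D:])
# Nonlinear: adaptive subspace (D largest-magnitude coefficients).
keep = np.argsort(np.abs(c))[::-1][:D]
mask = np.ones(N, dtype=bool)
mask[keep] = False
err_nonlinear = np.linalg.norm(c[mask])

# Orthonormality: discarded-coefficient norms are exactly the L2 errors.
print(err_linear, err_nonlinear)
assert err_nonlinear < err_linear
```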

5.
The density of polynomials is straightforward to prove in Sobolev spaces W^{k,p}((a,b)), but only partial results exist in weighted Sobolev spaces; here we improve some of these theorems. The situation is more complicated on infinite intervals, even for weighted L^p spaces; in the present paper we also prove further results for weighted Sobolev spaces on infinite intervals.
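The classical unweighted statement can be seen numerically: Chebyshev interpolants of a smooth function converge to it, together with their derivatives, in a W^{1,∞}-style norm on [−1, 1] (a sketch of the straightforward case only; the weighted setting the paper treats is not captured here):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.exp                 # smooth target on [-1, 1]; note f' = f
x = np.linspace(-1.0, 1.0, 501)

def sobolev_error(deg):
    # Chebyshev interpolant p of f; a W^{1,inf}-style error:
    # max over [-1, 1] of |f - p| + |f' - p'|.
    c = C.chebinterpolate(f, deg)
    p = C.chebval(x, c)
    dp = C.chebval(x, C.chebder(c))
    return np.max(np.abs(f(x) - p) + np.abs(f(x) - dp))

errs = [sobolev_error(d) for d in (2, 6, 15)]
print(errs)                # rapid decay: polynomials are dense in this norm
assert errs[-1] < 1e-10 and errs[-1] < errs[0]
```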

6.
We formulate a general approximation problem involving reflexive and smooth Banach spaces and give its explicit solution. Two applications are presented: the first is to the Bounded Completion Problem involving approximation of Hardy-class functions, while the second involves the construction of minimal vectors and hyperinvariant subspaces of linear operators, generalizing the Hilbert-space technique of Ansari and Enflo.


7.
This article concerns the spectral analysis of matrix-sequences that can be written as a non-Hermitian perturbation of a given Hermitian matrix-sequence. The main result reads as follows. Suppose that for every n there is a Hermitian matrix X_n of size n and that {X_n}_n ∼_λ f, that is, the matrix-sequence {X_n}_n enjoys an asymptotic spectral distribution, in the Weyl sense, described by a Lebesgue-measurable function f; if ‖Y_n‖_2 = o(√n), with ‖·‖_2 being the Schatten 2-norm, then {X_n + Y_n}_n ∼_λ f. In a previous article by Leonid Golinskii and the second author, a similar result was proved, but under the restrictive technical assumption that the involved matrix-sequences {X_n}_n and {Y_n}_n are uniformly bounded in spectral norm. Nevertheless, that result had a remarkable impact on the analysis of both the spectral distribution and the clustering of matrix-sequences arising from various applications, including the numerical approximation of partial differential equations (PDEs) and the preconditioning of PDE discretization matrices. The new result considerably extends the spectral analysis tools provided by the former one; in fact, we are now able to analyze linear PDEs with (unbounded) variable coefficients, preconditioned matrix-sequences, and so forth. A few selected applications are considered, extensive numerical experiments are discussed, and a further conjecture is illustrated at the end of the article.
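A small numerical illustration of the flavor of the result (sizes and scalings are demonstration choices, not the paper's experiments): take the Hermitian 1D Laplacian matrix, whose eigenvalues sample the symbol f(θ) = 2 − 2cos θ, add a random non-Hermitian Y_n with small Schatten 2-norm, and observe that the perturbed spectrum still tracks f:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hermitian part: the 1D Laplacian stencil; its eigenvalues are
# 2 - 2cos(j*pi/(n+1)), i.e. samples of the symbol f(t) = 2 - 2cos(t).
X = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
exact = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))

# Non-Hermitian perturbation with small Schatten 2-norm: entries of
# size 1/n give ||Y||_2 of order 1, which is o(sqrt(n)).
Y = rng.normal(scale=1.0 / n, size=(n, n))
print(np.linalg.norm(Y, 'fro') / np.sqrt(n))   # small ratio

perturbed = np.sort(np.linalg.eigvals(X + Y).real)
dev = np.max(np.abs(perturbed - np.sort(exact)))
print(dev)          # the perturbed spectrum still follows the symbol f
assert dev < 0.5
```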

8.
We establish formulas for the left- and right-hand Gâteaux derivatives in the Lorentz spaces Γ_{p,w} = {f : ∫_0^α (f**)^p w < ∞}, where 1 ≤ p < ∞, w is a nonnegative locally integrable weight function, and f** is the maximal function of the decreasing rearrangement f* of a measurable function f on (0, α), 0 < α ≤ ∞. We also find a general form for any supporting functional of each function in Γ_{p,w}, and necessary and sufficient conditions under which a spherical element of Γ_{p,w} is a smooth point of the unit ball in Γ_{p,w}. We show that strict convexity of the Lorentz space Γ_{p,w} is equivalent to 1 < p < ∞ together with the condition ∫_0^∞ w = ∞. Finally, we apply the obtained characterizations to study the best approximation elements for each function f ∈ Γ_{p,w} from any convex set K ⊂ Γ_{p,w} (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
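The two rearrangement objects in the definition are easy to compute discretely, which may help fix intuition (the sample function and the weight w(t) = t below are arbitrary examples): f* sorts the values of |f| in decreasing order, and f**(t) = (1/t)∫_0^t f* is its running average, which dominates f* and is nonincreasing:

```python
import numpy as np

# Discretize |f| on (0, 1): f* is the decreasing rearrangement and
# f**(t) = (1/t) * integral_0^t f* is its running average.
N = 1000
t = (np.arange(N) + 0.5) / N
f = np.abs(np.sin(5 * np.pi * t))

f_star = np.sort(f)[::-1]                 # decreasing rearrangement f*
f_2star = np.cumsum(f_star) / N / t       # maximal function f**

# f** dominates f* and is nonincreasing, as in the continuous theory.
assert np.all(f_2star >= f_star - 1e-12)
assert np.all(np.diff(f_2star) <= 1e-12)

# The Gamma_{p,w} functional for the example weight w(t) = t and p = 2.
p = 2
w = t
gamma = (np.sum(f_2star ** p * w) / N) ** (1.0 / p)
print(gamma)
```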

9.
In this paper we establish a result about uniformly equivalent norms and the convergence of best approximant pairs on the unit ball for a family of weighted Luxemburg norms with normalized weight functions depending on ε, as ε → 0. A general concept of Padé approximant is introduced, and we study its relation to the best local quasi-rational approximant. We characterize the limit of the error for polynomial approximation. We also obtain a new condition on a weight function that yields inequalities in the L^p norm, which play an important role in problems of weighted best local L^p approximation in several variables.
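For reference, the classical [m/n] Padé approximant (the special case the paper's general concept extends) is obtained by solving a linear system for the denominator from the Taylor coefficients; for exp, the [2/2] approximant beats the degree-4 Taylor polynomial at x = 1:

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    # Classical [m/n] Pade approximant from Taylor coefficients
    # c[0..m+n]: solve a linear system for the denominator q (q[0] = 1),
    # then read off the numerator p by Cauchy products.
    A = np.array([[c[m + i - j] if m + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    rhs = -np.array([c[m + i] for i in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

c = [1.0 / factorial(k) for k in range(5)]   # Taylor coefficients of exp
p, q = pade(c, 2, 2)      # [2/2]: (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)

x = 1.0
pade_val = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
taylor_val = sum(ck * x ** k for k, ck in enumerate(c))
print(pade_val, taylor_val, np.e)
assert abs(pade_val - np.e) < abs(taylor_val - np.e)
```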

10.
Besov as well as Sobolev spaces of dominating mixed smoothness are shown to be tensor products of Besov and Sobolev spaces defined on R. Using this, several useful characterizations are carried over from the one-dimensional case to the d-dimensional situation. Finally, consequences for hyperbolic cross approximation, in particular for tensor-product splines, are discussed.
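The point of hyperbolic cross approximation is that dominating mixed smoothness lets one replace the full tensor-product index grid by a much smaller set; assuming the standard definition of the cross as {k : Π_i max(k_i, 1) ≤ N}, a quick count shows how sparse it is:

```python
from itertools import product
from math import prod

def counts(N, d):
    # Full tensor-product grid {0..N}^d versus the hyperbolic cross
    # index set {k : prod_i max(k_i, 1) <= N}.
    full = (N + 1) ** d
    cross = sum(1 for k in product(range(N + 1), repeat=d)
                if prod(max(ki, 1) for ki in k) <= N)
    return full, cross

for N in (8, 16, 32):
    full, cross = counts(N, 3)
    print(N, full, cross)      # the cross is far sparser than the grid

full, cross = counts(32, 3)
assert cross < full // 10
```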


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号