Found 20 similar documents; search took 15 ms
1.
2.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim to construct deep neural networks with three hidden layers, using a sigmoidal activation function, to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets, with controllable magnitudes of the free parameters, reach the optimal approximation rate for both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures the approximation rate stops improving for functions of very high smoothness.
3.
This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in L2(Rd). Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional approximation theories for neural networks, which mainly use existence arguments, we follow a constructive approach: we prove that one may simply choose the parameters of the hidden units at random and then adjust only the weights between the hidden units and the output unit to make the neural network approximate any function in L2(Rd) to any accuracy. Our result shows that, given any non-zero activation function g: R+ → R with g(‖x‖) ∈ L2(Rd) for RBF hidden units, or any non-zero activation function g(x) ∈ L2(Rd) for TDI hidden units, the incremental network function fn with randomly generated hidden units converges to any target function in L2(Rd) with probability one as the number of hidden units n → ∞, provided one properly adjusts the weights between the hidden units and the output unit.
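The constructive approach described in this abstract — hidden units drawn at random, with only the output weights trained — can be illustrated with a minimal sketch. All names and parameter choices below are my own illustrative assumptions, not taken from the paper: Gaussian RBF units with randomly drawn centers and widths, and output weights fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate (an illustrative choice).
f = lambda x: np.sin(3 * x) * np.exp(-x**2)

# Grid on which the approximation error is measured.
x = np.linspace(-3, 3, 400)
y = f(x)

# Randomly generated RBF hidden units: centers and widths are drawn
# at random and never trained, as in the constructive approach above.
n_hidden = 60
centers = rng.uniform(-3, 3, n_hidden)
widths = rng.uniform(0.3, 1.0, n_hidden)
H = np.exp(-((x[:, None] - centers[None, :]) / widths[None, :]) ** 2)

# Only the weights between hidden units and output are adjusted
# (here by least squares).
w, *_ = np.linalg.lstsq(H, y, rcond=None)
err = np.sqrt(np.mean((H @ w - y) ** 2))
print(f"RMS error with {n_hidden} random hidden units: {err:.4f}")
```

Increasing `n_hidden` drives the error further down, mirroring the convergence-in-probability statement of the abstract.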
5.
6.
Under fairly mild measurability and integrability conditions on operator-valued kernels, boundedness results for integral operators on Bochner spaces Lp(X) are given. In particular, these results are applied to convolution operators.
7.
《Journal of Approximation Theory》2003,120(2):185-216
The density of polynomials is straightforward to prove in Sobolev spaces Wk,p((a,b)), but only partial results exist in weighted Sobolev spaces; here we improve some of these theorems. The situation is more complicated on infinite intervals, even for weighted Lp spaces; in the present paper we also prove some further results for weighted Sobolev spaces on infinite intervals.
8.
《Mathematical Methods in the Applied Sciences》2018,41(2):544-558
We prove the existence and uniqueness of the solution to a nonhomogeneous degenerate elliptic PDE of second order with boundary data in a weighted Orlicz–Slobodetskii space. Our goal is to work under assumptions on the constraints involved that are as general as possible: the class of weights, the boundary data, and the admitted coefficients. We also provide some estimates on the spectrum of our degenerate elliptic operator.
9.
10.
In this paper, we propose and analyze numerical algorithms for the fast solution of periodic elliptic problems in random media in Rd, d = 2, 3. Both the two-dimensional (2D) and three-dimensional (3D) elliptic problems are considered for jumping equation coefficients built as a checkerboard-type configuration of bumps randomly distributed on a large lattice. The finite element discretization procedure on a 3D uniform tensor grid is described in detail, and the Kronecker tensor product approach is proposed for fast generation of the stiffness matrix. We introduce tensor techniques for the construction of a low-Kronecker-rank, spectrally equivalent preconditioner in the periodic setting, to be used in the framework of the preconditioned conjugate gradient iteration. The discrete 3D periodic Laplacian pseudo-inverse is first diagonalized in the Fourier basis, and the diagonal matrix is then reshaped into a fully populated third-order tensor. The latter is approximated by a low-rank canonical tensor using the multigrid Tucker-to-canonical tensor transform. As an example, we apply the presented solver to the numerical analysis of the stochastic homogenization method, where the 3D elliptic equation must be solved many hundreds of times, and where for every random sampling of the equation coefficient one has to construct a new stiffness matrix and right-hand side. The computational characteristics of the presented solver, in terms of the lattice parameter and the grid size in both the 2D and 3D cases, are illustrated in numerical tests. Our solver can be used in various applications where the elliptic problem must be solved for a number of different coefficients, for example in many-particle dynamics, protein docking problems, or stochastic modeling.
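The Fourier diagonalization of the discrete periodic Laplacian mentioned in this abstract can be sketched in a few lines. This is an illustrative NumPy fragment under my own assumptions (grid size, sign convention), not the authors' code: the eigenvalues of the 3D operator form a Kronecker sum of 1D symbols, so the pseudo-inverse can be applied via forward and inverse FFTs.

```python
import numpy as np

# Eigenvalues of the 1D periodic second-difference matrix
# (stencil [-1, 2, -1]) on an n-point grid.
n = 16
k = np.arange(n)
lam1d = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / n)

# Eigenvalues of the 3D (negative) periodic Laplacian: a Kronecker-sum
# structure over the three directions, reshaped as a third-order tensor.
lam = lam1d[:, None, None] + lam1d[None, :, None] + lam1d[None, None, :]

# Pseudo-inverse of the spectrum: invert nonzero modes, zero the rest.
lam_pinv = np.zeros_like(lam)
mask = lam > 1e-12
lam_pinv[mask] = 1.0 / lam[mask]

def apply_laplacian_pinv(f):
    """Apply the periodic Laplacian pseudo-inverse via 3D FFTs."""
    return np.real(np.fft.ifftn(lam_pinv * np.fft.fftn(f)))

# Check: applying the operator and then its pseudo-inverse reproduces
# any zero-mean field up to machine precision.
rng = np.random.default_rng(1)
u = rng.standard_normal((n, n, n))
u -= u.mean()  # project out the constant null space
f = np.real(np.fft.ifftn(lam * np.fft.fftn(u)))
u_rec = apply_laplacian_pinv(f)
print(np.max(np.abs(u_rec - u)))
```

Each application costs O(n^3 log n), which is what makes such an operator attractive as a preconditioner when the elliptic problem must be solved for hundreds of coefficient samples.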
11.
We study the approximation problem for multivariate Cardaliaguet–Euvrard type neural network operators. Rate estimates are given for the approximation of continuous functions and of differentiable functions by these operators, and Jackson-type inequalities are established.
12.
Feilong Cao Huazhong Wang Shaobo Lin 《Mathematical Methods in the Applied Sciences》2011,34(15):1888-1895
Compared with fitting data on a planar hyperplane, fitting data on the sphere has been an important and active issue in geoscience, metrology, brain imaging, and so on. In this paper, with the help of the Jackson-type theorem for polynomial approximation on the sphere, we construct spherical feed-forward neural networks to approximate continuous functions defined on the sphere. As a metric, the modulus of smoothness of spherical functions is used to measure the error of the approximation, and a Jackson-type theorem on the approximation is established.
13.
Guillaume Bal 《Communications in Partial Differential Equations》2016,41(12):1839-1859
We study the stochastic homogenization of, and obtain a random fluctuation theory for, semilinear elliptic equations with a rapidly varying random potential. To first order, the effective potential is the average potential, and the nonlinearity is not affected by the randomness. We then study the limiting distribution of the properly scaled homogenization error (random fluctuations) in the space of square-integrable functions, and prove that the limit is a Gaussian distribution characterized by the homogenized solution, the Green's function of the equation linearized around the homogenized solution, and the integral of the correlation function of the random potential. These results enlarge the scope of the framework that we have developed for linear equations to the class of semilinear equations.
14.
15.
Erol Gelenbe
Andreas Stafylopatis
《Applied Mathematical Modelling》1991,15(10):534-541
We define a simple form of homogeneous neural network model whose characteristics are expressed in terms of probabilistic assumptions. The networks considered operate in an asynchronous manner and receive the influence of the environment in the form of external stimulations. The operation of the network is described by means of a Markovian process whose steady-state solution yields several global measures of the network's activity. Three different types of external stimulations, representing possible input mechanisms, are investigated. The analytical results obtained concern the macroscopic viewpoint and provide quick insight into the structure of the network's behavior.
16.
We prove a theorem concerning the approximation of generalized bandlimited multivariate functions by deep ReLU networks, for which the curse of dimensionality is overcome. Our theorem is based on a result by Maurey and on the ability of deep ReLU networks to approximate Chebyshev polynomials and analytic functions efficiently.
17.
Let s ≥ 1 be an integer and W be the class of all functions having integrable partial derivatives on [0, 1]^s. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned ε > 0 to each function in W. We prove that this number cannot be … if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any ε > 0, a network with … neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other Lp norms. The research of the first author was supported by NSF Grant #DMS 92-0698; the research of the second author was supported, in part, by AFOSR Grant #F49620-93-1-0150 and by NSF Grant #DMS 9404513.
18.
L. C. W. Dixon 《Journal of Optimization Theory and Applications》2001,111(3):489-500
The generalization problem considered in this paper assumes that a limited amount of input and output data from a system is available, and that from this information an estimate of the output produced by another input is required. The ideas arose in the study of neural networks, but apply equally to any approximation approach. The main result is that the type of neural network to be used for generalization should be determined by prior knowledge about the nature of the output from the system. Without such information, either of two networks matching the training data is equally likely to be the better at estimating the output generated by the same system at a new input. Therefore, the search for an optimum generalization network for use on all problems is inappropriate. For both (0, 1) and accurate real outputs, it is shown that simple approximations exist that fit the data, so these will be equally likely to generalize better than more sophisticated networks, unless prior knowledge is available that excludes them. For noisy real outputs, it is shown that the standard least squares approach forces the neural network to approximate an incorrect process; an alternative approach is outlined, which again is much easier to learn and use.
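The central point of this abstract — that two models fitting the same data equally well are, absent prior knowledge, equally plausible — can be seen in a toy sketch (my own illustration, not from the paper): a cubic polynomial and a piecewise-linear interpolant both reproduce four training points exactly, yet disagree at a new input.

```python
import numpy as np

# Four training points from an unknown system.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])

# Two models that both fit the training data exactly.
poly = np.polyfit(x, y, 3)                # cubic through all four points
piecewise = lambda t: np.interp(t, x, y)  # piecewise-linear through them

# At a new input they give different estimates; the data alone cannot
# tell us which generalizes better.
x_new = 0.5
p_cubic = np.polyval(poly, x_new)
p_linear = piecewise(x_new)
print(p_cubic, p_linear)
```

Only prior knowledge about the system (e.g. smoothness of its output) can justify preferring one estimate over the other, which is the paper's argument against a universally optimal generalization network.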
19.
In this paper, we study approximation by radial basis functions, including Gaussian, multiquadric, and thin plate spline functions, and derive orders of approximation under certain conditions. Moreover, neural networks are also constructed via a wavelet recovery formula and wavelet frames.
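Two of the kernel families named in this abstract can be tried directly in a short sketch (my own toy setup — target function, centers, and shape parameter are illustrative assumptions, not the paper's construction): interpolating a smooth function with Gaussian and multiquadric radial basis functions.

```python
import numpy as np

# Smooth target function and interpolation centers.
f = lambda x: np.cos(2 * x) + 0.5 * x
centers = np.linspace(-2, 2, 15)
y = f(centers)

def rbf_matrix(x, c, kind, eps=2.0):
    """Collocation matrix for a radial basis function family."""
    r = np.abs(x[:, None] - c[None, :])
    if kind == "gaussian":
        return np.exp(-(eps * r) ** 2)
    if kind == "multiquadric":
        return np.sqrt(1.0 + (eps * r) ** 2)
    raise ValueError(kind)

x_test = np.linspace(-2, 2, 200)
errors = {}
for kind in ("gaussian", "multiquadric"):
    A = rbf_matrix(centers, centers, kind)
    coef = np.linalg.lstsq(A, y, rcond=None)[0]   # interpolation weights
    approx = rbf_matrix(x_test, centers, kind) @ coef
    errors[kind] = np.max(np.abs(approx - f(x_test)))
print(errors)
```

Refining the centers (and tuning `eps`) reduces the error at rates of the kind the paper quantifies for each kernel family.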
20.
We study the construction and approximation of spherical neural networks. Using the generalized de la Vallée Poussin means on the sphere, spherical quadrature formulas, and a modified univariate Cardaliaguet–Euvrard neural network operator, we construct single-hidden-layer feedforward networks with the logistic activation function and give a Jackson-type error estimate.