Similar Articles
19 similar articles found.
1.
There have been many studies on density theorems for approximation by radial basis feedforward neural networks, and some approximation problems for Gaussian radial basis feedforward neural networks (GRBFNs) in special function spaces have also been investigated. This paper considers approximation by GRBFNs in the space of continuous functions. It is proved that the rate of approximation by GRBFNs with n^d neurons to any continuous function f defined on a compact subset K ⊂ R^d can be bounded by ω(f, n^(-1/2)), where ω(f, t) is the modulus of continuity of f.
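As a small numeric illustration of this kind of approximation result, the sketch below fits a Gaussian RBF network to a continuous function by least squares and measures the uniform error on a dense grid. The helper `gaussian_rbf_fit`, the center and width choices, and the test function |x| are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

def gaussian_rbf_fit(f, n, a=-1.0, b=1.0, width=None):
    """Fit a Gaussian RBF network with n neurons to f on [a, b] by least squares."""
    centers = np.linspace(a, b, n)
    if width is None:
        width = (b - a) / n          # roughly one inter-center spacing
    xs = np.linspace(a, b, 20 * n)   # dense sample points for the fit
    Phi = np.exp(-((xs[:, None] - centers[None, :]) / width) ** 2)
    coef, *_ = np.linalg.lstsq(Phi, f(xs), rcond=None)

    def net(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2) @ coef
    return net

f = np.abs                           # continuous but not differentiable at 0
net = gaussian_rbf_fit(f, n=32)
grid = np.linspace(-1, 1, 1001)
err = np.max(np.abs(net(grid) - f(grid)))
```

Increasing `n` shrinks `err`, qualitatively matching a modulus-of-continuity bound for a function that is merely continuous.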

2.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate approximation errors of neural network operators with the hyperbolic tangent activation function. First, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation type neural network operators are constructed to approximate univariate and bivariate functions, respectively. The errors of approximation are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are also estimated.
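A minimal sketch of the partition-of-unity idea: with φ(x) = (tanh(x+1) − tanh(x−1))/4, the integer translates of φ sum to 1 by a telescoping argument, and a quasi-interpolation operator can be built from samples f(k/n). The operator form and the normalization below are plausible assumptions for illustration, not necessarily the paper's exact construction.

```python
import numpy as np

def phi(x):
    # (tanh(x+1) - tanh(x-1)) / 4: integer translates of this function
    # sum to 1 for every x (telescoping sums of tanh differences).
    return (np.tanh(x + 1.0) - np.tanh(x - 1.0)) / 4.0

def quasi_interp(f, n, x, pad=15):
    """Quasi-interpolation operator F_n f(x) = sum_k f(k/n) phi(n x - k) on [0, 1]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    ks = np.arange(-pad, n + pad + 1)
    vals = f(np.clip(ks / n, 0.0, 1.0))   # extend f by its boundary values
    return (phi(n * x[:, None] - ks[None, :]) * vals[None, :]).sum(axis=1)

f = lambda t: np.sin(2 * np.pi * t)
x = np.linspace(0, 1, 501)
err = np.max(np.abs(quasi_interp(f, 64, x) - f(x)))
```

Because φ decays exponentially, truncating the sum to a moderate `pad` loses almost nothing, and the error scales with the smoothness of f, as the abstract's modulus-of-continuity estimates suggest.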

3.
L^p approximation problems in system identification with RBF neural networks are investigated. It is proved that by superpositions of functions of one variable in L^p_loc(R), one can approximate continuous functionals defined on a compact subset of L^p(K) and continuous operators from a compact subset of L^{p_1}(K_1) to a compact subset of L^{p_2}(K_2). These results show that if the activation function is in L^p_loc(R) and is not an even polynomial, then such RBF neural networks can approximate the above systems to any accuracy.

4.
The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve generalization performance and to decrease the magnitude of the network weights. In this paper, weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. Monotonicity of the penalized error function during training is also guaranteed. Simulation results for a 3-bit parity problem support the theoretical results.
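The training loop described here can be sketched as follows: per-sample gradient updates in a fixed order, with an L2 penalty added to the squared error. The network size, learning rate, and penalty coefficient below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity: inputs in {0,1}^3, target +1 for odd parity, -1 for even.
X = np.array([[i >> 2 & 1, i >> 1 & 1, i & 1] for i in range(8)], dtype=float)
y = np.where(X.sum(axis=1) % 2 == 1, 1.0, -1.0)

H, lam, lr = 8, 1e-3, 0.05       # hidden units, penalty coefficient, learning rate
W = rng.normal(0, 0.5, (H, 3))   # input-to-hidden weights
b = np.zeros(H)                  # hidden biases (left unpenalized here)
v = rng.normal(0, 0.5, H)        # hidden-to-output weights

def forward(x):
    h = np.tanh(W @ x + b)
    return h, h @ v

def penalized_error():
    err = sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y)) / 2
    return err + lam * (np.sum(W**2) + np.sum(v**2))

E0 = penalized_error()
for epoch in range(2000):
    for x, t in zip(X, y):       # samples presented in a fixed order each epoch
        h, out = forward(x)
        d_out = out - t
        grad_v = d_out * h + 2 * lam * v        # penalty gradient shrinks weights
        d_h = d_out * v * (1 - h**2)
        grad_W = np.outer(d_h, x) + 2 * lam * W
        v -= lr * grad_v
        W -= lr * grad_W
        b -= lr * d_h
E1 = penalized_error()
```

The penalty term keeps the weight norms bounded during training, which is the mechanism behind the boundedness theorems the abstract mentions.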

5.
In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). He also remarked: "Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper answers the problem in the affirmative and proves that for a bounded activation function σ that is continuous almost everywhere (a.e.) on R, the collection of SLFNs is dense in C(K) if and only if σ is non-constant a.e.

6.
1. Introduction. The feedforward Multilayer Perceptron (MLP) is one of the most widely used artificial neural network models. Its fields of application include pattern recognition, identification and control of dynamic systems, system modeling, nonlinear prediction of time series, etc. [1-4], founded on its nonlinear function approximation capability. Research on this type of network has been stimulated since the discovery and popularization of the Backpropagation learning…

7.
We study a class of deep neural networks whose architectures form a directed acyclic graph (DAG). For backpropagation defined by gradient descent with adaptive momentum, we show that the weights converge for a large class of nonlinear activation functions. The proof generalizes the results of Wu et al. (2008), who showed convergence for a feedforward network with one hidden layer. As an example of the effectiveness of DAG architectures, we describe compression through an autoencoder and compare against sequential feedforward networks under several metrics.
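A simplified stand-in for the optimizer discussed here: fixed-momentum (heavy-ball) gradient descent on a quadratic, where convergence of the iterates can be observed directly. The paper analyzes an adaptive momentum schedule; the fixed coefficients below are an assumption for illustration.

```python
import numpy as np

def heavy_ball(grad, w0, lr=0.1, mu=0.9, steps=500):
    """Gradient descent with momentum: v <- mu*v - lr*grad(w); w <- w + v."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w)
        w = w + v
    return w

# Minimize f(w) = 0.5 * w^T A w with a poorly conditioned A; the unique
# minimizer is w = 0, so the iterates should converge to the origin.
A = np.diag([1.0, 100.0])
w = heavy_ball(lambda w: A @ w, [1.0, 1.0], lr=0.01, mu=0.9)
```

Momentum lets a single step size handle eigenvalues spanning two orders of magnitude, which is why such schemes are attractive for the ill-conditioned loss surfaces of deep networks.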

8.
A discrete-time version of the bidirectional Cohen-Grossberg neural network is studied in this paper. Some sufficient conditions are obtained, based on the Lyapunov method, to ensure global exponential stability of such discrete-time networks. These results require neither symmetry of the connection matrix nor monotonicity, boundedness, or differentiability of the activation function.

9.
In this paper, a novel and effective approach to the analysis of impulsive synchronization of neural networks excited by parameter white noise is investigated, using a nonlinear operator called the generalized Dahlquist constant. The proposed approach offers a design procedure for impulsive synchronization of a large class of neural networks. Numerical simulations, in which the theoretical results are applied to typical neural networks with and without delay terms, demonstrate the effectiveness and feasibility of the proposed technique.

10.
In this paper a canonical neural network with adaptively changing synaptic weights and activation function parameters is presented to solve general nonlinear programming problems. The basic part of the model is a sub-network used to find a solution of quadratic programming problems with simple upper and lower bounds. By sequentially activating the sub-network under the control of an external computer, or of a special analog or digital processor that adjusts the weights and parameters, one then solves general nonlinear programming problems. A convergence proof and numerical results are given.
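The core sub-problem here, a quadratic program with simple bounds, can be sketched with a conventional projected-gradient iteration standing in for the analog sub-network dynamics. The function name and step-size rule below are illustrative assumptions, not the paper's network.

```python
import numpy as np

def box_qp(Q, c, lo, hi, lr=None, steps=2000):
    """Solve min 0.5 x^T Q x + c^T x subject to lo <= x <= hi
    by projected gradient descent (Q symmetric positive definite)."""
    Q, c = np.asarray(Q, float), np.asarray(c, float)
    if lr is None:
        lr = 1.0 / np.linalg.eigvalsh(Q).max()   # safe step for convex Q
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(steps):
        # gradient step, then project back onto the box [lo, hi]
        x = np.clip(x - lr * (Q @ x + c), lo, hi)
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-2.0, -1.0])
x = box_qp(Q, c, lo=0.0, hi=1.0)
```

Driving such a solver repeatedly with updated weights and parameters, as the abstract describes, is how the model extends from bound-constrained QPs to general nonlinear programs.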

11.
Let m be an integer, T an m-linear Calderón-Zygmund operator, and u, v_1, ..., v_m weights. In this paper, the authors give sufficient conditions on the weights (u, v_k), 1 ≤ k ≤ m, such that T is bounded from L^{p_1}(R^n, v_1) × ··· × L^{p_m}(R^n, v_m) to L^{p,∞}(R^n, u).

12.
A Fast Bottom-Up Method for Constructing Neural Networks
A new method for constructing neural networks is introduced. The conventional Cascade-Correlation algorithm starts from a minimal network (with no hidden neurons), then adds and trains hidden neurons one at a time until the desired performance is achieved. We propose a fast algorithm, related to constructive algorithms, that starts from a suitable initial network structure and repeatedly adds new neurons and associated weights to the network until a satisfactory result is obtained. Experiments show that, compared with the conventional Cascade-Correlation method, this fast method has several advantages: better classification performance, a smaller network structure, and faster learning.

13.
A New Regularity Class for the Navier-Stokes Equations in R^n, by H. Beirão da Veiga (Department of Mathematics, Pisa University, Pisa, Italy). Abstract…

14.
Let a(x) = (a_{ij}(x)) be a uniformly continuous, symmetric, matrix-valued function satisfying a uniformly elliptic condition, and let p(t, x, y) be the transition density function of the diffusion process associated with the Dirichlet space (ℰ, H_0^1(R^d)), where ℰ(u, v) = (1/2) ∫_{R^d} Σ_{i,j=1}^d (∂u(x)/∂x_i)(∂v(x)/∂x_j) a_{ij}(x) dx. Then, by using the sharpened Aronson estimates established by D. W. Stroock, it is shown that lim_{t→0} 2t ln p(t, x, y) = −d²(x, y). Moreover, it is proved that P_y^ε has the large deviation property with rate function I(ω) = (1/2) ∫_0^1 ⟨ω̇(t), a^{−1}(ω(t)) ω̇(t)⟩ dt as ε → 0 and y → x, where P_y^ε denotes the diffusion measure family associated with the Dirichlet form (ℰ, H_0^1(R^d)).

15.
Let T be a singular integral operator, and let 0 < α < 1. If t > 0 and the functions f and Tf are both integrable, then there exists a function $g \in B_{Lip_\alpha}(ct)$ such that $\left\| {f - g} \right\|_{L^1} \leqslant C\,\mathrm{dist}_{L^1}(f, B_{Lip_\alpha}(t))$ and $\left\| {Tf - Tg} \right\|_{L^1} \leqslant C\left\| {f - g} \right\|_{L^1} + \mathrm{dist}_{L^1}(Tf, B_{Lip_\alpha}(t))$. (Here $B_X(\tau)$ is the ball of radius $\tau$ centered at zero in the space X; the constants C and c do not depend on t and f.) The function g is independent of T and is constructed from f by a nearly algorithmic procedure resembling the classical Calderón-Zygmund decomposition.

16.
Let X be a Banach space and Φ an Orlicz function. Denote by L^Φ(I, X) the space of X-valued Φ-integrable functions on the unit interval I, equipped with the Luxemburg norm. For f_1, f_2, ..., f_m ∈ L^Φ(I, X), a distance formula dist(f_1, f_2, ..., f_m, L^Φ(I, G)) is presented, where G is a closed subspace of X. Moreover, some existence and characterization results concerning the best simultaneous approximation of L^Φ(I, G) in L^Φ(I, X) are given.

17.
Let $\{V_k\}_{k=-\infty}^{+\infty}$ be a multiresolution analysis generated by a function $\phi(x)\in L^2(R^2)$. Under this multiresolution framework, the key point for studying wavelet decompositions in $L^2(R^2)$ is to study the properties of $W_0$, the orthogonal complement of $V_0$ in $V_1$: $V_1=V_0\oplus W_0$. In this paper the author studies the structure of $W_0$ and furthermore shows that a box spline of three directions can generate a wavelet decomposition of $L^2(R^2)$.

18.
In this paper, we consider Liouville-type theorems for stable solutions of a Kirchhoff equation with M(t) = a + bt^θ, where a > 0, b, θ ≥ 0, and θ = 0 if and only if b = 0; here N ≥ 2, q > 0, and the nonnegative function g(x) ∈ L^1_loc(R^N). Under suitable conditions on g(x), θ and q, we investigate the nonexistence of positive stable solutions for this problem.

19.
In this paper, we give some characterizations of almost completely regular spaces and c-semistratifiable spaces (CSS) by semi-continuous functions. We mainly show that: (1) Let X be a space. Then the following statements are equivalent: (i) X is almost completely regular. (ii) Every two disjoint subsets of X, one of which is compact and the other regular closed, are completely separated. (iii) If g, h : X → I, g is compact-like, h is normal lower semicontinuous, and g ≤ h, then there exists a continuous function f : X → I such that g ≤ f ≤ h. (2) Let X be a space. Then the following statements are equivalent: (a) X is CSS. (b) There is an operator U assigning to each decreasing sequence of compact sets (F_j)_{j∈N} a decreasing sequence of open sets (U(n, (F_j)))_{n∈N} such that (b1) F_n ⊆ U(n, (F_j)) for each n ∈ N; (b2) ∩_{n∈N} U(n, (F_j)) = ∩_{n∈N} F_n; (b3) given two decreasing sequences of compact sets (F_j)_{j∈N} and (E_j)_{j∈N} such that F_n ⊆ E_n for each n ∈ N, then U(n, (F_j)) ⊆ U(n, (E_j)) for each n ∈ N. (c) There is an operator Φ : LCL(X, I) → USC(X, I) such that, for any h ∈ LCL(X, I), 0 ≤ Φ(h) ≤ h, and 0 < Φ(h)(x) < h(x) whenever h(x) > 0.
