Similar Documents (20 results)
1.
Neural network approximation and system identification in L^p(R^n)   (Cited by 1: 0 self-citations, 1 other)
This paper studies the approximation of functions in L^p(R^n), of nonlinear continuous functionals on L^p(R^n), and of nonlinear continuous operators on L^p(R^n) by superpositions of functions; these questions are related to the approximation capability of Sigma-Pi neural networks.

2.
The connections and differences among the Riemann integral, the direct Riemann integral, and the Riemann–Stieltjes integral are discussed through proofs and counterexamples. The results show that if a function is directly Riemann integrable then it is Riemann integrable and the two integrals coincide, but the converse fails; and that if a function is Riemann integrable, an arbitrary continuous function need not be Riemann–Stieltjes integrable with respect to it. From the discussion, a sufficient condition for direct Riemann integrability and a sufficient condition for Riemann integrability are also obtained.

3.
This paper determines the form of the Sylow p-subgroups of the infinite linear group GL(R) = ∪_n GL_n(R) over a finite commutative local ring R. Let M be the unique maximal ideal of R and k = R/M its residue field; let χ(k) denote the characteristic of k, and assume p is coprime to χ(k). The authors prove that any Sylow p-subgroup S of GL(R) is isomorphic either to the direct product of a countably infinite direct product of the P_i with an infinite direct product of P(j) (when p ≠ 2, or p = 2 and χ(k)^β ≡ 1 (mod 4)), or to the direct product of an infinite direct product of the P_i with an infinite direct product of P(j) (when p = 2 and χ(k)^β ≡ 3 (mod 4)); here P_i is a Sylow p-subgroup of GL_{ep_i}(R) (respectively, GL_{2r_i}(R)), and P(j) is isomorphic to P = ∪_{i∈I} P_i, where I is a countable set.

4.
Using the necessary and sufficient condition for a bounded function to be Riemann integrable, this paper discusses the Riemann integrability of certain composite functions and gives an example in which the outer function is Riemann integrable and the inner function is continuous, yet the composite function is not Riemann integrable.
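For reference, a standard counterexample of this kind (not necessarily the one constructed in the paper) takes the inner function to be continuous and to vanish exactly on a fat Cantor set:

```latex
% Let C \subset [0,1] be a fat Cantor set (nowhere dense, positive measure)
% and let g(x) = \operatorname{dist}(x, C), which is continuous on [0,1].
% Take the outer function f(0) = 0 and f(y) = 1 for y \neq 0; f is Riemann
% integrable, since its only discontinuity is at y = 0. Then
(f \circ g)(x) =
\begin{cases}
  0, & x \in C,\\
  1, & x \notin C,
\end{cases}
% which is discontinuous at every point of C. Since C has positive measure,
% Lebesgue's criterion shows that f \circ g is not Riemann integrable.
```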

5.
II. Non-absolute integrals and their properties. §3 The relation between the L-integral, the improper R-integral, and the KH-integral. We have already seen that the class of KH-integrable functions is indeed broader than the class of L-integrable functions, and Example 2.14 shows that improperly R-integrable functions are also KH-integrable. In this section we prove directly, from the theory, that the improper R-integral and the L-integral are both special cases of the KH-integral. …

6.
The integrability of the limit function of a sequence of Riemann integrable functions is discussed. Using only the theory of the Riemann integral itself, the paper proves in turn the Riemann integrability of the limit of a uniformly convergent sequence, a dominated convergence theorem for the Riemann integral, and a dominated convergence theorem for improper integrals, and gives several examples of applications.
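For orientation, a bounded (dominated) convergence theorem of this kind, in the form usually attributed to Arzelà (the hypotheses in the paper may be stated differently), reads:

```latex
% Arzelà's bounded convergence theorem for the Riemann integral:
% if f_n and f are Riemann integrable on [a,b], f_n -> f pointwise,
% and the f_n are uniformly bounded, then the integrals converge.
f_n, f \in \mathcal{R}[a,b],\quad f_n \to f \text{ pointwise on } [a,b],\quad
\sup_{n,\,x} |f_n(x)| \le M
\;\Longrightarrow\;
\lim_{n\to\infty} \int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.
```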

7.
The completion of the Riemann integral   (Cited by 2: 0 self-citations, 2 other)
The basic characterization of Riemann integrable functions is reviewed, and it is pointed out that the class of Riemann integrable functions is not closed under taking limits in the sense of the integral. After constructing the completion of this space, it is proved that the completion is exactly the space of Lebesgue integrable functions, which shows that the completed form of the Riemann integral is the Lebesgue integral.
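One familiar illustration of this lack of closure (not necessarily the one used in the paper): enumerate the rationals in [0,1] as q_1, q_2, … and set

```latex
% Each f_n is Riemann integrable (finitely many discontinuities) with
% \int_0^1 f_n = 0, and f_n increases pointwise to the Dirichlet function
% \chi_{\mathbb{Q}\cap[0,1]}, which is not Riemann integrable but is
% Lebesgue integrable with integral 0.
f_n(x) =
\begin{cases}
  1, & x \in \{q_1, \dots, q_n\},\\
  0, & \text{otherwise},
\end{cases}
\qquad n = 1, 2, \dots
```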

8.
程磊  李静 《高等数学研究》2021,24(1):77-79,90
This paper discusses the connection and the difference between the existence of an antiderivative and Riemann integrability. By exhibiting concrete functions it shows that the existence of an antiderivative and Riemann integrability are mutually independent concepts, neither of which implies the other.

9.
The relation between integrability of a function and the existence of an antiderivative   (Cited by 3: 0 self-citations, 3 other)
The relationship between the Riemann integrability of a function and the existence of an antiderivative is examined in detail. By constructing concrete functions it is shown that Riemann integrability and the existence of an antiderivative are distinct, independently formed concepts, neither of which implies the other.
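Two classical functions of the kind used in such constructions (not necessarily the ones chosen in the papers above):

```latex
% Riemann integrable on [-1,1] but with no antiderivative: the sign
% function has a single jump, so it is Riemann integrable, yet it lacks
% the intermediate value (Darboux) property that every derivative has,
% hence it is not the derivative of any function.
f(x) = \operatorname{sgn}(x), \qquad x \in [-1,1].
% Possessing an antiderivative on [0,1] but not Riemann integrable there:
F(x) = \begin{cases} x^{2}\sin(1/x^{2}), & x \ne 0,\\ 0, & x = 0,\end{cases}
\qquad
F'(x) = \begin{cases} 2x\sin(1/x^{2}) - \dfrac{2}{x}\cos(1/x^{2}), & x \ne 0,\\ 0, & x = 0.\end{cases}
% F' exists at every point, so F' has the antiderivative F, but F' is
% unbounded near 0 and therefore not Riemann integrable on [0,1].
```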

10.
Global asymptotic behavior of Hopfield-type neural networks with periodic inputs   (Cited by 4: 1 self-citation, 3 other)
向兰  周进  刘曾荣  孙姝 《应用数学和力学》2002,23(12):1220-1226
Without assuming that the nonlinear activation functions are bounded or differentiable, Mawhin's coincidence degree theory and the Liapunov function method are applied to obtain sufficient conditions for a class of Hopfield-type neural networks with periodic inputs to possess a periodic solution that is globally exponentially stable.

11.
Deep learning has been widely applied and brought breakthroughs in speech recognition, computer vision, and many other domains. Deep neural network architectures and computational issues have been well studied in machine learning, but a theoretical foundation for understanding the approximation or generalization ability of deep learning methods generated by network architectures such as deep convolutional neural networks is still lacking. Here we show that a deep convolutional neural network (CNN) is universal, meaning that it can be used to approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough. This answers an open question in learning theory. Our quantitative estimate, given tightly in terms of the number of free parameters to be computed, verifies the efficiency of deep CNNs in dealing with large dimensional data. Our study also demonstrates the role of convolutions in deep CNNs.
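As a purely numerical illustration of the universality claim (a toy sketch, not the network architecture analyzed in the cited paper), one can fit a sampled continuous function with a small 1-D convolutional network:

```python
# Toy illustration only: regress the sampled values of a continuous
# target function from a length-64 input signal with a small 1-D CNN.
import torch
import torch.nn as nn

torch.manual_seed(0)

grid = torch.linspace(0.0, 1.0, 64)            # sampling grid on [0, 1]
target = torch.sin(4.0 * torch.pi * grid) + 0.5 * grid

x = grid.view(1, 1, -1)                        # (batch, channels, length)
y = target.view(1, 1, -1)

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=5, padding=2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.2e}")
```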

12.
This paper presents a type of feedforward neural network (FNN) that can be used to approximately interpolate, with arbitrary precision, any set of distinct data in multidimensional Euclidean spaces. Such networks can also uniformly approximate any continuous function of one or two variables. Using the modulus of continuity of the function as a metric, the rates of convergence of the approximate interpolation networks are estimated, and two Jackson-type inequalities are established.

13.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers using a sigmoidal activation function to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets with controllable magnitude of free parameters can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures, the approximation rate stops improving for functions of very high smoothness.

14.
This paper studies the capability of incremental constructive feedforward neural networks (FNNs) with random hidden units to approximate functions in L2(Rd). Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast to conventional approximation theories for neural networks, which mainly rely on existence arguments, we follow a constructive approach and prove that one may simply choose the parameters of the hidden units at random and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in L2(Rd) to any accuracy. Our result shows that, given any non-zero activation function g : R+ → R with g(‖x‖) ∈ L2(Rd) for RBF hidden units, or any non-zero activation function g(x) ∈ L2(Rd) for TDI hidden units, the incremental network function fn with randomly generated hidden units converges to any target function in L2(Rd) with probability one as the number of hidden units n → ∞, provided only that the weights between the hidden units and the output unit are properly adjusted.
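A simplified numerical sketch in the spirit of this result (not the incremental construction of the paper itself): draw RBF hidden units at random, fit only the hidden-to-output weights by least squares, and observe how the discrete L2 error typically shrinks as the number of units grows. The grid, target function, and sampling ranges below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 400)            # evaluation grid (d = 1)
target = np.sign(x) * np.exp(-x ** 2)      # an arbitrary target in L2(R)

def random_rbf_features(x, n_units, rng):
    """Gaussian RBF hidden units with randomly drawn centers and widths."""
    centers = rng.uniform(-3.0, 3.0, size=n_units)
    widths = rng.uniform(0.1, 1.0, size=n_units)
    return np.exp(-((x[:, None] - centers[None, :]) / widths[None, :]) ** 2)

for n_units in (5, 20, 80, 320):
    phi = random_rbf_features(x, n_units, rng)
    # only the weights between hidden units and the output are optimized
    w, *_ = np.linalg.lstsq(phi, target, rcond=None)
    err = np.sqrt(np.mean((phi @ w - target) ** 2))   # discrete L2 (RMS) error
    print(f"{n_units:4d} random units: error ≈ {err:.4f}")
```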

15.
In recent years, the analysis of the uniform universal approximation property of feedforward neural networks has attracted much attention. This paper systematically analyzes the uniform approximation capability of three-layer feedforward networks, whose activation function σ is a generalized sigmoidal function, for the family of quasi-difference order-preserving functions. This uniformity result is then used to establish a new class of fuzzy neural networks (FNNs), namely polygonal FNNs, and the approximation capability of these networks with respect to two given fuzzy functions is studied; the resulting conclusions play a key role in analyzing the universal approximation property of polygonal FNNs.

16.
We obtain a sharp lower bound estimate for the approximation error of a continuous function by single hidden layer neural networks with a continuous activation function and weights varying on two fixed directions. We show that for a certain class of activation functions this lower bound estimate turns into equality. The obtained result provides us with a method for direct computation of the approximation error. As an application, we give a formula, which can be used to compute instantly the approximation error for a class of functions having second order partial derivatives.

17.
In recent years, feedforward neural networks have been widely applied in areas of analysis where Logit regression is the standard statistical method, yet direct comparisons between the two are rarely made. This paper attempts such a comparative study of Logit regression and feedforward neural networks: it presents some theoretical results and properties, discusses practical problems encountered in their application, and further investigates, both analytically and by simulation, several important issues such as asymptotics, overfitting, and model selection; finally, some conclusions are discussed and presented.
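A toy comparison of the kind discussed (purely illustrative; the settings below are arbitrary and do not reproduce the simulation design of the cited paper):

```python
# Compare logistic (Logit) regression with a small feedforward network
# on synthetic classification data and report test accuracy for each.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("Logit test accuracy:", logit.score(X_te, y_te))
print("FNN   test accuracy:", mlp.score(X_te, y_te))
```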

18.
This paper studies the approximation, by linear combinations of translations and dilations of a single function, of functions in Lp(Rn) on arbitrary compact sets, and establishes a rather strong approximation result, which is of considerable significance for applied research on neural networks.

19.
Recently, Li [16] introduced three kinds of single-hidden-layer feedforward neural networks with optimized piecewise linear activation functions and fixed weights, and obtained upper and lower bound estimates on the approximation accuracy of these FNNs for continuous functions defined on bounded intervals. In the present paper, we point out that there are errors both in the definitions of the FNNs and in the proof of the upper estimates in [16]. Using new methods, we also give correct approximation rate estimates for approximation by Li's neural networks.

20.
Motivated by the wide use of piecewise linear functions, this paper studies the approximation theory of shallow and deep piecewise linear neural networks. The universal approximation theorem for the three-layer perceptron model is extended to piecewise linear neural networks, with approximation error estimates given in terms of the number of hidden neurons. Using an explicit construction of sawtooth functions from piecewise linear functions, it is proved that analytic functions can be approximated at an exponential rate by stacking the depth of piecewise linear networks, and corresponding numerical experiments are provided.
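A minimal sketch of the sawtooth construction alluded to above (the paper's exact construction may differ): composing the piecewise linear "hat" function with itself k times yields a sawtooth with 2^(k-1) teeth, i.e. exponentially many linear pieces in the depth k.

```python
import numpy as np

def hat(x):
    """Piecewise linear hat function on [0, 1]: 0 -> 0, 1/2 -> 1, 1 -> 0."""
    return np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))

def sawtooth(x, k):
    """Compose the hat function k times (a network of depth ~ k)."""
    y = x
    for _ in range(k):
        y = hat(y)
    return y

x = np.linspace(0.0, 1.0, 1001)
for k in (1, 2, 3, 4):
    teeth = 2 ** (k - 1)                      # number of teeth doubles per layer
    print(f"depth {k}: sawtooth with {teeth} teeth,"
          f" max value {sawtooth(x, k).max():.3f}")
```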
