Similar Documents
 20 similar documents retrieved (search time: 109 ms)
1.
The structure of neural networks has long been a difficult issue in both theoretical research and practical applications. Using recent results of Vugar E. Ismailov, this paper discusses the problem of a neural network learning a given set of sample points. The results show that, with a λ-strictly increasing activation function, only two hidden-layer nodes are needed to learn any given sample set. The paper also discusses the difference between using the usual sigmoid function and a λ-strictly increasing function as the activation function in the hidden nodes.

2.
A class of difference schemes for the two-dimensional vorticity equation of viscous fluid flow   (Cited: 1; self: 0, others: 1)
郭本瑜 (Guo Ben-Yu), Acta Mathematica Sinica (数学学报), 1974, 17(4): 242-258
This paper discusses numerical methods for the two-dimensional vorticity equation of incompressible viscous fluid flow. In Part (I), the original equation is written as a weighted average of its conservative and non-conservative forms, and the nonlinear term is partially weighted between implicit and explicit treatments, which yields a class of difference schemes. Methods for choosing the various weights are then discussed, and error estimates and some numerical results are given. In Part (II), two nonlinear inequalities are proved; they apply to error estimates for high-dimensional, multi-level, implicitly-explicitly weighted nonlinear difference schemes. Using them, the above estimates are proved rigorously, from which convergence follows. With suitable choices of the weights, the schemes can also be made stable. Finally it is pointed out that the method of this paper can be applied to the numerical solution of certain other high-dimensional nonlinear problems.

3.
何佳 (He Jia), 薛玉梅 (Xue Yumei), 《应用数学》 (Applied Mathematics), 2018, 31(1): 202-207
This paper constructs a special network with fractal geometric features. Taking the nodes of the network's central layer as traps, the trapping problem is studied with weights assigned to both nodes and edges, and an exact analytic formula for the weighted average trapping time is derived.

4.
To overcome inherent drawbacks of feedforward neural networks, a fuzzy feedforward neural network with a single hidden layer, built from sampled data, is proposed. The model obtains optimal weights by the weights-direct-determination method; the network can set the number of hidden neurons autonomously according to the amount of sampled data, switching between approximate and exact interpolation. Computer simulations show that the fuzzy feedforward network has high approximation accuracy, an adjustable structure, and good real-time performance, and can be used for prediction and denoising.
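A minimal sketch of the weights-direct-determination idea described above, assuming a single hidden layer of fixed Gaussian-type units whose output weights are obtained in closed form from the Moore-Penrose pseudoinverse; the paper's fuzzy membership functions and structure-selection rule are not reproduced, and all names and parameters here are illustrative.

```python
import numpy as np

def direct_weight_determination(x, y, centers, width=1.0):
    """Fit the output weights of a one-hidden-layer network in closed form.

    Hidden activations are fixed Gaussian bumps (an assumption for this
    sketch); only the hidden-to-output weights are computed, via the
    Moore-Penrose pseudoinverse, so no iterative training is needed."""
    # Hidden-layer response matrix: H[i, j] = activation of unit j on sample i.
    H = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    w = np.linalg.pinv(H) @ y          # least-squares optimal output weights
    return lambda t: np.exp(-((t[:, None] - centers[None, :]) / width) ** 2) @ w

# Usage: approximate a noisy 1-D sample set.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
net = direct_weight_determination(x, y, centers=np.linspace(0.0, 1.0, 10))
print(np.max(np.abs(net(x) - y)))      # small residual on the training samples
```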

5.
Whether a given neural network can learn a given sample set has long been an interesting question. For single-hidden-layer feedforward networks the question is trivial when the number of hidden neurons equals the number of samples, whereas the case of fewer hidden neurons than samples has received little discussion and deserves more attention. This paper proposes an approach to this question, applies it to the three-variable XOR problem, and gives some preliminary results.
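The "trivial" case mentioned above (as many hidden neurons as samples) can be made concrete: with randomly chosen inner weights and biases the hidden-layer response matrix is generically non-singular, so output weights that interpolate the samples exactly are obtained by solving one linear system. A minimal sketch under these assumptions (random frozen inner parameters, sigmoid activation):

```python
import numpy as np

def interpolating_network(x, y, rng=np.random.default_rng(1)):
    """Single-hidden-layer sigmoid network with n hidden units for n samples.

    Inner weights and biases are drawn at random and frozen; the output
    weights are then the exact solution of an n-by-n linear system, so the
    network reproduces every sample exactly (generic, non-singular case)."""
    n = x.size
    a = rng.uniform(1.0, 5.0, n)                 # hidden weights
    b = rng.uniform(-3.0, 3.0, n)                # hidden biases
    sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
    H = sigma(np.outer(x, a) + b)                # n x n hidden response matrix
    c = np.linalg.solve(H, y)                    # exact output weights
    return lambda t: sigma(np.outer(t, a) + b) @ c

x = np.array([0.0, 0.2, 0.5, 0.7, 1.0])
y = np.array([1.0, -1.0, 0.5, 2.0, 0.0])
net = interpolating_network(x, y)
print(np.allclose(net(x), y))                    # True: exact interpolation
```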

6.
Implicit complementarity problems have wide applications in many areas of the natural sciences. This paper studies a class of generalized implicit complementarity problems. New projection iterative algorithms are constructed from two improved variants of the extragradient method and applied to this class of problems; convergence of the algorithms is established under pseudo-monotonicity, and the choice of the algorithms' parameters and correction step sizes is discussed.
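For orientation, a minimal sketch of the classical extragradient (Korpelevich) projection step on which such algorithms build, here for an ordinary complementarity problem x >= 0, F(x) >= 0, x·F(x) = 0 with a monotone affine F; the paper's modified algorithms, implicit mapping, and step-size correction rules are not reproduced, and the example data are illustrative.

```python
import numpy as np

def extragradient_ncp(F, x0, tau=0.1, iters=500):
    """Classical extragradient method for the complementarity problem
    x >= 0, F(x) >= 0, x.F(x) = 0, i.e. a variational inequality over
    the nonnegative orthant (projection is just clipping at zero)."""
    x = np.maximum(x0, 0.0)
    for _ in range(iters):
        y = np.maximum(x - tau * F(x), 0.0)   # predictor (extra) step
        x = np.maximum(x - tau * F(y), 0.0)   # corrector step uses F at y
    return x

# Affine monotone example: F(x) = M x + q with M positive definite.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
sol = extragradient_ncp(lambda x: M @ x + q, x0=np.zeros(2))
print(sol, M @ sol + q)                       # complementary pair, both >= 0
```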

7.
This paper constructs a class of three-layer feedforward adaptive wavelet neural networks, in which the translation and dilation factors of wavelet analysis are fitted as the weights and thresholds from the input layer to the hidden layer, wavelet basis functions serve as the hidden-layer activation functions, and the parameters are adjusted adaptively by gradient descent. The adaptive wavelet network is applied to the numerical solution of Fredholm integral equations of the second kind, and numerical examples verify the feasibility and effectiveness of the method.

8.
Using martingale methods, strong limit theorems for transformations of non-homogeneous hidden Markov models are discussed. As a special case, the notion of random selection is extended to non-homogeneous hidden Markov models, and several limit theorems on random selection and the random fair ratio for finite non-homogeneous hidden Markov models are obtained.

9.
Constrained shape control of fourth-order NURBS curves based on knot and weight modification   (Cited: 1; self: 0, others: 1)
Changing a knot of a k-th order NURBS curve produces a one-parameter family of NURBS curves whose envelope is a NURBS curve of order k - a defined by the same control points, where a is the multiplicity of the modified knot. Using this theoretical result, the paper proposes several shape-control methods for fourth-order NURBS curves based on modifying one knot and two consecutive weights, subject to certain position and tangent-direction constraints.

10.
A high-accuracy compact implicit difference scheme is proposed for the numerical solution of the one-dimensional unsteady convection-diffusion-reaction equation, with truncation error O(τ^4 + τ^2 h^2 + h^4), i.e. the scheme is fourth-order accurate overall. The difference equation involves only three grid nodes on each time level, so the resulting algebraic system is tridiagonal and can be solved by the chase (Thomas) method. Numerical examples verify the accuracy and reliability of the scheme.
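The abstract notes that each time level leads to a tridiagonal linear system solvable by the chase (Thomas) method. A minimal, self-contained sketch of that solver, independent of the particular compact-scheme coefficients (which are not reproduced here):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system by the chase (Thomas) algorithm.

    a: sub-diagonal (length n, a[0] unused), b: diagonal (length n),
    c: super-diagonal (length n, c[-1] unused), d: right-hand side.
    Runs in O(n), which is what makes implicit compact schemes cheap
    per time level."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve.
n = 6
a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True
```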

11.
Approximation by neural networks with a single hidden layer in the weighted space L_ω^q is proved, and upper and lower bound estimates for the network approximation are obtained. In the sense of weighted approximation, this result reveals the relation between the convergence order of the network and the number of hidden units, providing an important theoretical basis for applications of neural networks.

12.
Ricerche di Matematica - Single hidden layer feedforward neural networks can represent multivariate functions that are sums of ridge functions. These ridge functions are defined via an activation...

13.
In this article, we study approximation properties of single hidden layer neural networks with weights varying in finitely many directions and with thresholds from an open interval. We obtain a necessary and simultaneously sufficient measure theoretic condition for density of such networks in the space of continuous functions. Further, we prove a density result for neural networks with a specifically constructed activation function and a fixed number of neurons.

14.
Online gradient method has been widely used as a learning algorithm for training feedforward neural networks. Penalty is often introduced into the training procedure to improve the generalization performance and to decrease the magnitude of network weights. In this paper, some weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with a hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty is also guaranteed in the training iteration. Simulation results for a 3-bit parity problem are presented to support our theoretical results.
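A minimal sketch of the training procedure described above, assuming a one-hidden-layer sigmoid network, squared error with an L2 weight penalty, and online updates in a fixed sample order within each epoch, applied to the 3-bit parity problem; the network size, learning rate, and penalty coefficient are illustrative choices, not the paper's settings.

```python
import itertools
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# 3-bit parity: the target is 1 exactly when the number of ones is odd.
X = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
Y = X.sum(axis=1) % 2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 1.0, (3, 8)), np.zeros(8)   # input -> hidden (8 units assumed)
W2, b2 = rng.normal(0.0, 1.0, 8), 0.0                # hidden -> output

eta, lam = 0.5, 1e-5                                  # learning rate, L2 penalty coefficient
for epoch in range(10000):
    for x, y in zip(X, Y):                            # fixed sample order within each epoch
        h = sigmoid(x @ W1 + b1)
        o = sigmoid(h @ W2 + b2)
        delta_o = (o - y) * o * (1 - o)               # output-layer error term
        delta_h = delta_o * W2 * h * (1 - h)          # backpropagated hidden-layer term
        # Online gradient step with weight decay (the penalty term).
        W2 -= eta * (delta_o * h + lam * W2); b2 -= eta * delta_o
        W1 -= eta * (np.outer(x, delta_h) + lam * W1); b1 -= eta * delta_h

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print("training accuracy:", np.mean(pred == Y))       # typically reaches 1.0 on this small problem
```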

15.
This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in L2(Rd). Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In comparison with conventional methods, in which an existence approach is mainly used in approximation theories for neural networks, we follow a constructive approach to prove that one may simply randomly choose the parameters of the hidden units and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in L2(Rd) to any accuracy. Our result shows that given any non-zero activation function g: R+ → R with g(‖x‖_{Rd}) ∈ L2(Rd) for RBF hidden units, or any non-zero activation function g(x) ∈ L2(Rd) for TDI hidden units, the incremental network function fn with randomly generated hidden units converges to any target function in L2(Rd) with probability one as the number of hidden units n → ∞, provided only that the weights between the hidden units and the output unit are properly adjusted.
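A minimal sketch of the constructive idea described above: RBF hidden units are added with randomly chosen centres and widths that are never retuned, and only the hidden-to-output weights are refit (here by least squares) as the network grows, so the approximation error to an L2-type target decreases without tuning the hidden-unit parameters. The target function and parameter ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 400)
f = np.sign(x) * np.sqrt(np.abs(x))                  # target function to approximate

centers, widths = [], []
for n in (1, 5, 20, 80):                             # grow the network incrementally
    while len(centers) < n:                          # append randomly generated RBF units
        centers.append(rng.uniform(-3.0, 3.0))
        widths.append(rng.uniform(0.2, 1.0))
    # Hidden response matrix of the current random units (centres/widths frozen).
    H = np.exp(-((x[:, None] - np.array(centers)) / np.array(widths)) ** 2)
    w, *_ = np.linalg.lstsq(H, f, rcond=None)        # only output weights are (re)fit
    err = np.sqrt(np.mean((H @ w - f) ** 2))         # discrete L2-type error
    print(f"{n:3d} hidden units: RMS error {err:.4f}")
```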

16.
This paper deals with feedforward neural networks containing a single hidden layer and with sigmoid/logistic activation function. Training such a network is equivalent to implementing nonlinear regression using a flexible functional form, but the functional form in question is not easy to deal with. The Chebyshev polynomials are suggested as a way forward, providing an approximation to the network which is superior to Taylor series expansions. Application of these approximations suggests that the network is liable to a 'naturally occurring' parameter redundancy, which has consequences for the training process as well as certain statistical implications. On the other hand, parameter redundancy does not appear to damage the fundamental property of universal approximation.
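A small sketch of the building block suggested above: approximating the logistic activation by a Chebyshev polynomial fit on a fixed interval (here via numpy's Chebyshev routines), which is the ingredient used to replace the network's sigmoid units by a more tractable polynomial form. The working interval and degrees are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

logistic = lambda t: 1.0 / (1.0 + np.exp(-t))

t = np.linspace(-6.0, 6.0, 2001)                     # working interval (assumed)
for deg in (3, 5, 9):
    coeffs = C.chebfit(t, logistic(t), deg)          # least-squares fit in the Chebyshev basis
    err = np.max(np.abs(C.chebval(t, coeffs) - logistic(t)))
    print(f"degree {deg}: max error {err:.2e}")      # error shrinks as the degree grows
```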

17.
§1 Introduction. In recent years there has been growing interest in the problem of neural network and related approximation, and many important results have been obtained. Because of its ability of large-scale parallel computation and of self-adaptation and approximation, the neural network has been widely applied. The approximation ability of a neural network depends on its topological structure. Let R^s be an s-dimensional Euclidean space and φ(x) a real function defined on R^s. When φ(x) is an excitation function and x ∈ R^s is an input vector, the simple neural network…

18.
Single-hidden-layer neural networks and best polynomial approximation   (Cited: 7; self: 1, others: 6)
The approximation problem for single-hidden-layer neural networks is studied. Taking best polynomial approximation as the yardstick, a constructive method is used to estimate the rate at which single-hidden-layer networks approximate continuous functions. The results show that for any continuous function defined on a compact set, a single-hidden-layer neural network can be constructed that approximates it with error no more than twice the error of the function's best polynomial approximation.
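The quantitative content of this result can be restated schematically as follows (notation is assumed here, not taken from the paper; the dependence of the network's size on the polynomial degree n follows the paper's construction and is not reproduced):

```latex
% f: continuous function on the compact set K; Pi_n: polynomials of degree at most n;
% N: the single-hidden-layer network constructed in the paper for this degree.
\[
  \| f - N \|_{C(K)} \;\le\; 2\, E_n(f),
  \qquad
  E_n(f) \;=\; \inf_{p \in \Pi_n} \| f - p \|_{C(K)} .
\]
```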

19.
In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). Meanwhile, he pointed out "Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper replies in the affirmative to the problem and proves that for a bounded and almost everywhere (a.e.) continuous activation function σ on R, the collection of SLFNs is dense in C(K) if and only if σ is non-constant a.e.

20.
It is the aim of this contribution to continue our investigations on a special family of hyperbolic-type linear operators (here, for compactly supported continuous functions on R^n) which immediately can be interpreted as concrete real-time realizations of three-layer feedforward neural networks with sigma-pi units in the hidden layer. To indicate how these results are connected with density results we start with some introductory theorems on this topic. Moreover, we take a detailed look at the complexity of the generated neural networks in order to achieve global ε-accuracy.
