Similar Literature
20 similar documents found (search time: 140 ms)
1.
梅树立 《经济数学》2012,29(4):8-14
For the nonlinear Black-Scholes equation, an adaptive multiscale wavelet precise integration method for solving nonlinear partial differential equations is presented, based on the quasi-Shannon wavelet function. The method first uses interpolating wavelet theory to construct a multiscale wavelet interpolation operator for approximating continuous functions; with this operator the nonlinear Black-Scholes equation can be adaptively discretized into a system of nonlinear ordinary differential equations. The precise integration method for ODE systems is then combined with the dynamic process of the wavelet transform, and nonlinear treatment techniques (such as homotopy analysis) make it possible to solve the nonlinear Black-Scholes equation effectively. Numerical results demonstrate the method's advantages in accuracy and computational efficiency.
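The precise integration step in item 1 hinges on computing the matrix exponential of the discretized linear part with 2^N scaling and squaring of the *increment* exp(Hτ) − I, which avoids loss of significance. A minimal numpy sketch of that core routine (the function name and the 4th-order Taylor truncation are my own choices, not the paper's):

```python
import numpy as np

def precise_integration_expm(H, dt, N=20):
    """Precise integration method: exp(H*dt) via 2^N scaling and squaring.

    The tiny sub-step tau = dt / 2^N is integrated with a Taylor
    expansion of the increment T_a = exp(H*tau) - I; the increment is
    then doubled back up N times. Storing T_a rather than the full
    exponential is the key numerical idea of the method.
    """
    n = H.shape[0]
    tau = dt / 2.0**N
    Htau = H * tau
    # 4th-order Taylor series of the increment exp(H*tau) - I
    Ta = Htau @ (np.eye(n) + Htau @ (np.eye(n) / 2
                 + Htau @ (np.eye(n) / 6 + Htau / 24)))
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta   # (I + Ta)^2 - I = 2*Ta + Ta^2
    return np.eye(n) + Ta

# scalar sanity check: exp(1.0)
T = precise_integration_expm(np.array([[1.0]]), 1.0)
# skew-symmetric generator: exp integrates to a plane rotation
R = precise_integration_expm(np.array([[0.0, -1.0], [1.0, 0.0]]), np.pi / 2)
```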

2.
In both theoretical research and practical applications, the structure of a neural network has long been a difficult problem. Using recent results of Vugar E. Ismailov, this paper discusses how a neural network learns a set of sample points. The results show that with λ-strictly increasing functions, only two hidden-layer nodes are needed to learn any given sample set. The difference between using the usual sigmoid function and a λ-strictly increasing function as the activation function in the hidden nodes is also discussed.

3.
To overcome the inherent defects of feedforward neural networks, a fuzzy feedforward network with a single hidden layer, built from sampled data, is proposed. The model obtains optimal weights by the direct weight determination method, and the network can set the number of hidden neurons autonomously according to the amount of sampled data, switching between approximate and exact interpolation. Numerical simulations show that the fuzzy feedforward network offers high approximation accuracy, an adjustable structure, and good real-time performance, and can be used for prediction and denoising.
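The direct weight determination idea in item 3 can be illustrated with a plain (non-fuzzy) single-hidden-layer network: fix the hidden layer, then obtain the output weights in one least-squares solve instead of iterative training. With as many hidden neurons as samples the solve interpolates the data; with fewer it approximates. A hedged sketch, with randomly chosen hidden parameters standing in for the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def direct_weights(X, y, n_hidden):
    """Single-hidden-layer net whose output weights come from one
    least-squares solve (a stand-in for the paper's direct weight
    determination; the fuzzy part of the model is omitted)."""
    W = rng.normal(scale=3.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(scale=3.0, size=n_hidden)
    H = np.tanh(X @ W + b)                                  # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# with as many hidden neurons as samples, the fit (near-)interpolates
X = np.linspace(0, 1, 10).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
W, b, beta = direct_weights(X, y, n_hidden=10)
```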

4.
The synchronization of two chaotic delayed neural networks under a newly added adaptive controller is studied. By constructing a new Lyapunov function and combining Lyapunov stability theory, the LMI toolbox, and the principle of adaptive feedback control, conditions for adaptive synchronization of the two chaotic delayed neural networks are obtained. Finally, numerical simulations verify the effectiveness of the results.

5.
A generalized wavelet transform and its application to artificial neural networks   Cited by 1 (self-citations: 0, by others: 1)
Motivated by the approximation of nonlinear systems by artificial neural networks, this paper introduces a new wavelet transform and studies its properties. As a corollary, a constructive proof of an approximation theorem for single-hidden-layer feedforward neural networks under the Lp norm is given.

6.
葛彩霞 《应用数学》1999,12(1):47-49
This paper studies the best approximation capability of three-layer feedforward neural networks. We prove that for three-layer feedforward networks whose hidden-neuron activation function is a polynomial, once the number of hidden neurons exceeds a certain bound, the input-output functions of the network span a finite-dimensional linear space, so the network achieves best approximation in C(K). We further conjecture that for non-polynomial activation functions, a network with finitely many neurons does not have the best approximation property.

7.
This paper studies finite-time complete synchronization of quaternion-valued delayed neural networks under adaptive control. By designing a set of effective and novel adaptive controllers, finite-time synchronization of the master-slave systems is achieved, and a theoretical estimate of the settling time is computed. Using the Lyapunov function method and inequality techniques, sufficient conditions for finite-time synchronization of the master-slave quaternion-valued delayed neural networks are given. Finally, numerical simulations verify the theoretical results.

8.
For planar elasticity problems, a mesh-refinement scheme based on newest-vertex bisection is first used to give an adaptive finite element method that requires neither marking of oscillation terms and refined elements nor the "interior node" property. Then, by analyzing the solution and the error indicators on each mesh level — using the orthogonality of solutions on adjacent levels, an upper bound on the energy error between the discrete and exact solutions, and the approximate contraction of error indicators between adjacent levels — the convergence of this adaptive finite element method is rigorously proved. Finally, numerical experiments confirm that the method is convergent and robust.

9.
Nonlinear wavelet estimation of regression functions in the random design case   Cited by 2 (self-citations: 0, by others: 2)
In the random design case, nonlinear wavelet estimators and adaptive nonlinear wavelet estimators of the regression function are constructed. It is shown that the nonlinear wavelet estimator attains the optimal convergence rate in Besov spaces, and that the adaptive nonlinear wavelet estimator attains the near-optimal rate over a large class of Besov spaces, differing from the optimal rate only by a factor of ln n. Thus, in the random design case, the constructed nonlinear wavelet estimators enjoy almost the same good properties as those constructed for fixed design points. Moreover, the errors are only required to have bounded third moments, not to follow a normal distribution.
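The nonlinear (thresholded) wavelet estimator of item 9 can be sketched for the simplest setting: an equispaced design, Haar wavelets, and the universal hard threshold σ√(2 ln n). Everything below (the test signal, noise level, and threshold rule; random designs would need an extra binning step) is illustrative rather than the paper's exact construction:

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar DWT; input length must be a power of two."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
        a = s
    coeffs.append(a)          # coarsest approximation coefficient
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def wavelet_regression(y, sigma):
    """Hard-threshold wavelet estimate of a regression function on an
    equispaced design; the universal threshold stands in for the
    paper's adaptive rule."""
    thr = sigma * np.sqrt(2 * np.log(len(y)))
    coeffs = haar_forward(y)
    den = [np.where(np.abs(d) > thr, d, 0.0) for d in coeffs[:-1]]
    den.append(coeffs[-1])    # keep the coarse level untouched
    return haar_inverse(den)

rng = np.random.default_rng(1)
n = 256
t = np.arange(n) / n
f = np.where(t < 0.5, 1.0, -1.0)          # piecewise-constant truth
y = f + 0.1 * rng.normal(size=n)          # noisy observations
fhat = wavelet_regression(y, 0.1)
```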

10.
Building on a wavelet neural network model, a method for identifying high-dimensional nonlinear systems with wavelet neural networks is proposed, and an identification algorithm for such systems is derived. A simulated example shows that the generalization ability of the system is effectively improved and that a wavelet network with good adaptive capability is obtained.

11.
We consider the problem of approximating the Sobolev class of functions by neural networks with a single hidden layer, establishing both upper and lower bounds. The upper bound uses a probabilistic approach, based on the Radon and wavelet transforms, and yields similar rates to those derived recently under more restrictive conditions on the activation function. Moreover, the construction using the Radon and wavelet transforms seems very natural to the problem. Additionally, geometrical arguments are used to establish lower bounds for two types of commonly used activation functions. The results demonstrate the tightness of the bounds, up to a factor logarithmic in the number of nodes of the neural network. This revised version was published online in June 2006 with corrections to the Cover Date.

12.
§1 Introduction

In recent years there has been growing interest in the problem of neural networks and related approximation, and many important results have been obtained. Because of its ability of large-scale parallel computation and of perfect self-adapting and approximation, the neural network has been widely applied. The approximation ability of a neural network depends on its topological structure. Let Rs be an s-dimensional Euclidean space and φ(x) a real function defined on Rs. When φ(x) is an excitation function and x ∈ Rs is an input vector, the simple neural network…

13.
Single-hidden-layer neural networks and best polynomial approximation   Cited by 7 (self-citations: 1, by others: 6)
The approximation problem for single-hidden-layer neural networks is studied. Taking best polynomial approximation as the yardstick, the rate at which single-hidden-layer networks approximate continuous functions is estimated by a constructive method. The results show that for any continuous function defined on a compact set, one can construct a single-hidden-layer neural network approximating it, with approximation error at most twice the function's best polynomial approximation error.
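The constructive flavor of item 13 can be imitated with a toy construction: steep tanh units act as approximate step functions, and their weighted sum reproduces a piecewise-constant approximant of the target, giving a modulus-of-continuity-type error. This is not the paper's operator, only a sketch of why such explicit constructions work:

```python
import numpy as np

def step_network(f, a, b, n, steepness=200.0):
    """Constructive single-hidden-layer tanh network: a sum of steep
    sigmoid 'steps' reproduces the piecewise-constant approximant of f
    on n equal subintervals of [a, b] (a toy version, not the paper's
    actual construction)."""
    knots = np.linspace(a, b, n + 1)
    vals = f((knots[:-1] + knots[1:]) / 2)        # value on each piece
    jumps = np.diff(np.concatenate([[0.0], vals]))  # successive level changes

    def net(x):
        x = np.asarray(x, dtype=float)[..., None]
        # 0.5*(1 + tanh(k*(x - t))) approximates a unit step at knot t
        steps = 0.5 * (1 + np.tanh(steepness * (x - knots[:-1])))
        return steps @ jumps

    return net

net = step_network(np.sin, 0.0, np.pi, n=200)
x = np.linspace(0.1, np.pi - 0.1, 50)     # stay away from the endpoints
err = np.max(np.abs(net(x) - np.sin(x)))
```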

14.
According to the characteristics of wood dyeing, we propose a predictive model of pigment formula for wood dyeing based on a Radial Basis Function (RBF) neural network. In practical application, however, it is found that the number of neurons in the hidden layer of an RBF neural network is difficult to determine. In general, one must experiment several times, guided by experience and prior knowledge, which lacks a strict, theoretically grounded design procedure; nor is it known whether the RBF neural network converges. This paper proposes a peak density function to determine the number of neurons in the hidden layer. In contrast to existing approaches, the centers and widths of the radial basis functions are initialized by extracting features of the samples, eliminating the uncertainty caused by random initialization of the training parameters and the topology of the RBF neural network. The average relative error of the original RBF neural network is 1.55% after 158 epochs, whereas the average relative error of the RBF neural network improved by the peak density function is only 0.62% after 50 epochs. Therefore, the convergence rate and approximation precision of the RBF neural network are improved significantly.
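Once the centers and width of an RBF network are fixed from the data, as item 14 advocates, the output weights follow from a single linear least-squares solve. A hedged sketch in which the samples themselves serve as centers and the width is a hand-picked constant (stand-ins for the paper's peak-density initialization):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix Phi[i, j] = exp(-|x_i - c_j|^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

def rbf_fit(X, y, centers, width):
    """RBF network trained by linear least squares with fixed centers/width."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

X = np.linspace(-1, 1, 20).reshape(-1, 1)
y = np.exp(-X[:, 0] ** 2) * np.cos(3 * X[:, 0])
width = 0.1                     # hand-picked, roughly the sample spacing
w = rbf_fit(X, y, X, width)     # centers = the samples themselves
yhat = rbf_predict(X, X, width, w)
```

With one center per sample the square system interpolates the training data; fewer centers would trade exactness for smoothing, as in the paper's reduced hidden layer.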

15.
Improving the BP algorithm and self-adjusting the structure of artificial neural networks   Cited by 16 (self-citations: 0, by others: 16)
This paper solves the problem of choosing the structural parameters and learning rate of BP neural networks, improves the traditional BP algorithm, and proposes a dynamic full-parameter self-adjusting learning algorithm for BP networks, implemented as a computer program, so that the number of hidden nodes and the learning rate are selected entirely dynamically. This reduces manual intervention and improves the learning rate and the adaptability of the network. Computational results show that the dynamic full-parameter self-adjusting algorithm outperforms the traditional method: the trained neural network model not only fits the training data accurately but also predicts future trends fairly precisely.
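The dynamic learning-rate rule behind item 15 — grow the rate while the training error keeps falling, shrink it and reject the step when the error rises — can be shown on a linear least-squares model (hidden-node adjustment omitted; the same rule would drive the weights of a full BP network):

```python
import numpy as np

def adaptive_gd(X, y, epochs=200, lr=0.01, up=1.05, down=0.7):
    """Gradient descent with a dynamic learning rate: accept and grow
    the rate when the error decreases, otherwise discard the step and
    shrink the rate."""
    w = np.zeros(X.shape[1])
    err = np.mean((X @ w - y) ** 2)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w_new = w - lr * grad
        err_new = np.mean((X @ w_new - y) ** 2)
        if err_new < err:
            w, err, lr = w_new, err_new, lr * up   # accept, speed up
        else:
            lr *= down                             # reject, slow down
    return w, err

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w, err = adaptive_gd(X, y)
```

Because rejected steps never change the weights, the training error is monotonically non-increasing, which is the property the self-adjusting scheme relies on.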

16.
The power generated by wind turbines changes rapidly because of the continuous fluctuation of wind speed and air density. As a consequence, it can be important to predict the energy production starting from some basic input parameters. The aim of this paper is to show that a two-hidden-layer neural network can represent a useful tool to carefully predict the wind energy output. By using proper experimental data (collected from three wind farms in Southern Italy) in combination with a back-propagation learning algorithm, a suitable neural architecture is found, characterized by the hyperbolic tangent transfer function in the first hidden layer and the logarithmic sigmoid transfer function in the second hidden layer. Simulation results are reported, showing that the estimated wind energy values (predicted by the proposed network) are in good agreement with the experimentally measured values.
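Item 16's architecture (tanh in the first hidden layer, logistic sigmoid in the second, linear output) is easy to state as a forward pass. The weights below are random placeholders — the paper obtains them by back-propagation on wind-farm data — and the layer sizes are illustrative:

```python
import numpy as np

def logsig(x):
    """Logistic sigmoid transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, params):
    """tanh -> logsig -> linear, the transfer-function stack the paper
    found suitable for wind energy prediction."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.tanh(x @ W1 + b1)     # first hidden layer: tanh
    h2 = logsig(h1 @ W2 + b2)     # second hidden layer: logistic sigmoid
    return h2 @ W3 + b3           # linear output: predicted energy

rng = np.random.default_rng(3)
d_in, n1, n2 = 2, 8, 6            # e.g. wind speed and air density inputs
params = (rng.normal(size=(d_in, n1)), rng.normal(size=n1),
          rng.normal(size=(n1, n2)), rng.normal(size=n2),
          rng.normal(size=(n2, 1)), rng.normal(size=1))
out = forward(rng.normal(size=(5, d_in)), params)
```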

17.
This paper presents an MLP-type neural network with some fixed connections and a backpropagation-type training algorithm that identifies the full set of solutions of a complete system of nonlinear algebraic equations with n equations and n unknowns. The proposed structure is based on a backpropagation-type algorithm with bias units in the output neurons layer. Its novelty and innovation with respect to similar structures is the use of the hyperbolic tangent output function associated with an interesting feature, the use of an adaptive learning rate for the neurons of the second hidden layer, a feature that adds a high degree of flexibility and parameter tuning during the network training stage. The paper presents the theoretical aspects of this approach as well as a set of experimental results that justify the necessity of such an architecture and evaluate its performance. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Based on experimental data for five types of concrete ductile-column energy dissipators under low-cycle repeated loading, and using the working principles of neural networks, a BP neural network model is established by setting up the input, hidden, and output layers and determining the numbers of input units, output units, and hidden-layer nodes. The network is trained on part of the existing experimental data and used to predict and fit the skeleton curves of the various concrete ductile-column energy dissipators, digitizing the skeleton curves so that the fitted curves support analysis and judgment, describing the skeleton curves completely and providing a reliable data model for subsequent simulation studies of dissipator performance. The results show that this method is feasible.

19.
Here we study the univariate quantitative approximation of real and complex valued continuous functions on a compact interval or all the real line by quasi-interpolation hyperbolic tangent neural network operators. This approximation is derived by establishing Jackson type inequalities involving the modulus of continuity of the engaged function or its high order derivative. Our operators are defined by using a density function induced by the hyperbolic tangent function. The approximations are pointwise and with respect to the uniform norm. The related feed-forward neural network is with one hidden layer.
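The quasi-interpolation operator of item 19 can be written down concretely: a density function ψ built from the hyperbolic tangent, whose integer translates sum to one, and the operator F_n f(x) = Σ_k f(k/n) ψ(nx − k) — each term one neuron of a single-hidden-layer network. The particular normalization of ψ, the truncation range K, and the test function are my choices, not necessarily the paper's:

```python
import numpy as np

def psi(x):
    """Density function induced by tanh; its integer translates form a
    partition of unity: sum_k psi(x - k) = 1 (telescoping sum)."""
    return 0.25 * (np.tanh(x + 1) - np.tanh(x - 1))

def quasi_interp(f, n, x, K=60):
    """Quasi-interpolation operator F_n f(x) = sum_k f(k/n) psi(n*x - k),
    truncated to |k - n*x| <= K since psi decays exponentially."""
    x = np.asarray(x, dtype=float)
    k0 = np.round(n * x).astype(int)
    ks = k0[..., None] + np.arange(-K, K + 1)       # nodes near each x
    return np.sum(f(ks / n) * psi(n * x[..., None] - ks), axis=-1)

x = np.linspace(0, 1, 11)
approx = quasi_interp(np.sin, 100, x)   # pointwise approximation of sin
```

The error is governed by the modulus of continuity of f at scale 1/n, which is the content of the Jackson-type inequalities the abstract mentions.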

20.
In the current note, we show that a two hidden layer neural network with d inputs, d neurons in the first hidden layer, 2d+2 neurons in the second hidden layer and with a specifically constructed sigmoidal and infinitely differentiable activation function can approximate any continuous multivariate function with arbitrary accuracy.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号