Similar Documents
 20 similar documents found (search time: 203 ms)
1.
A Fast Algorithm for Multilayer Neural Networks   Total citations: 5 (self: 0, others: 5)
This paper gives an in-depth analysis of the single-parameter dynamic search (SPDS) algorithm for feedforward multilayer neural networks proposed in [4], and presents two schemes for fast one-dimensional search, thereby achieving even faster learning and training of multilayer neural networks.

2.
To address the strong randomness of the extreme learning machine, a differential-evolution-based ELM model (DE-ELM) is proposed. The differential evolution (DE) algorithm is used to optimize the randomly assigned input weight matrix and hidden-layer biases of the extreme learning machine (ELM), reducing the impact of randomness on ELM, damping network oscillation, and improving prediction accuracy. DE-ELM is applied to predicting battery state of charge (SOC) and compared with ELM and a BP neural network. The results show that DE-ELM outperforms both ELM and the BP network on battery SOC prediction and meets the required prediction accuracy.
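For orientation, the basic ELM training step that DE-ELM builds on can be sketched in a few lines: random input weights and hidden biases, then a single least-squares solve for the output weights. This is a minimal sketch with illustrative names and a toy target, not the paper's DE-ELM; the DE step would wrap `elm_fit` to search over `W` and `b`.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=32):
    """Basic ELM: random input weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (DE would tune these)
    b = rng.normal(size=n_hidden)                # random hidden biases (DE would tune these)
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                 # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression target: y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_fit(X, T)
err = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

A DE wrapper would evaluate this fit for many candidate `(W, b)` vectors and keep the one with the lowest validation error, which is what removes the sensitivity to the random draw.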

3.
This paper constructs a class of three-layer feedforward adaptive wavelet neural networks in which the translation and dilation factors of wavelet analysis are cast as the weights and thresholds from the input layer to the hidden layer; wavelet basis functions serve as the hidden-layer activation functions, and the parameters are adjusted adaptively by gradient descent. The adaptive wavelet neural network is applied to the numerical solution of Fredholm integral equations of the second kind, and numerical examples verify the feasibility and effectiveness of the method.

4.
To address redundancy in high-dimensional data and the performance instability caused by the randomly assigned weights of the extreme learning machine (ELM), the restricted Boltzmann machine (RBM) is combined with ELM to give an RBM-optimized ELM algorithm (RBM-ELM). The RBM reduces the dimensionality of the raw data while simultaneously yielding optimized values for the ELM input-layer weights and hidden-layer biases. Experimental results show that, compared with four machine learning algorithms (random forest, logistic regression, support vector machine, and the plain extreme learning machine), RBM-ELM achieves higher classification accuracy.

5.
A Fast and Globally Convergent Learning Algorithm for BP Neural Networks   Total citations: 1 (self: 0, others: 1)
The error backpropagation (BP) algorithm has seen many successful applications in training multilayer neural networks. However, BP also has shortcomings: slow convergence and a tendency to become trapped in local minima. This paper proposes a fast and globally convergent BP learning algorithm and gives a detailed analysis and proof of its global convergence. Empirical results show the proposed algorithm to be more efficient and more accurate than standard BP.
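For context, the standard BP baseline that such papers improve on amounts to full-batch gradient descent through the layers. A minimal two-layer sketch with illustrative names and a toy target, assuming plain squared-error loss (not the paper's globally convergent variant):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 2))
T = X[:, :1] * X[:, 1:]                  # toy target: product of the two inputs

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)             # forward pass, hidden layer
    Y = H @ W2 + b2                      # forward pass, output layer
    E = Y - T                            # output error
    # backward pass: propagate gradients layer by layer
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)       # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2       # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1

H = np.tanh(X @ W1 + b1)
mse = float(np.mean((H @ W2 + b2 - T) ** 2))
```

The slow convergence and local-minima issues the abstract mentions come from exactly this loop: a fixed learning rate on a non-convex surface.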

6.
To address the poor reliability of the water-cooling control system and the wide temperature fluctuations on the high-speed wire-rod line of the large rolling mill of Wuhan Iron and Steel (Group) Corporation, intelligent computing theory and methods are applied to the identification, modeling, and optimization of this industrial control system. Feedforward neural networks trained with gradient-descent BP, radial basis function networks, and Levenberg-Marquardt BP are compared in terms of approximation accuracy and training speed on the SMS water-cooling system. The behavior of the Levenberg-Marquardt BP feedforward network on the training and test sets is studied, and a water-cooling control system model based on it is established, resolving the reliability and temperature-control accuracy problems of the high-speed wire-rod water-cooling control system.

7.
Application of BP Neural Networks to Time-Series Forecasting of the Railway Passenger Market   Total citations: 17 (self: 1, others: 16)
The railway passenger market is influenced by many factors, most of which act nonlinearly, and time-series forecasting is in essence the realization of a nonlinear mapping. Since a feedforward neural network with a sufficient number of hidden nodes can approximate any continuous function to arbitrary accuracy, multilayer feedforward networks trained with the BP algorithm are now in widespread use. This paper explores time-series forecasting of the railway passenger market using the backpropagation (BP) algorithm of artificial neural networks. Numerical results show that the method achieves high forecasting accuracy and is simple to apply, providing a new approach to railway passenger market forecasting.

8.
A method is presented for predicting bankruptcy with a multilayer feedforward neural network whose structure is optimized by particle swarm optimization and a genetic algorithm. Fusing the strengths of particle swarm optimization, the genetic algorithm, and neural networks, the method searches adaptively and in parallel for the optimal network structure, thereby building an optimized prediction model. Using a mixed sample of bankrupt and non-bankrupt firms from the UCI machine learning repository, data are read at random from the dataset and preprocessed, and the predictions are evaluated objectively with 7-fold cross-validation. Simulations show that the method automatically and effectively constructs an optimized network structure, with faster learning and better generalization; compared with other methods, it achieves higher bankruptcy-prediction accuracy.

9.
A short-term load forecasting method is proposed based on the wavelet transform and an extreme learning machine optimized by an improved firefly algorithm. The original load series is denoised by wavelet decomposition and reconstruction; during model training, the improved firefly algorithm optimizes the extreme learning machine parameters to obtain the best model for each subseries; the subseries are forecast separately and the forecasts superimposed to give the final prediction. Numerical experiments on data series at two time scales show that the model outperforms classical forecasting models such as ARMA, BP neural networks, support vector machines, and LSSVM.

10.
Hail is a severe natural hazard. To predict hail in advance, a method is proposed that uses a BP neural network for forecasting and an extreme learning machine for recognition. Taking the mean, variance, and entropy of radar echo reflectivity images as discriminant features, an extreme learning machine capable of classifying cloud-layer characteristics is first trained; then, drawing on the formation process of hail clouds, a BP neural network trained on a large sample forecasts the next 30 minutes of data; finally, the trained extreme learning machine classifies the cloud type. Experimental results show that the BP neural network can predict hailfall 5-20 minutes in advance on average, with good forecasting performance.

11.
In this era of big data, more and more models need to be trained to mine useful knowledge from large-scale data, and it has become a challenging problem to train multiple models accurately and efficiently so as to make full use of limited computing resources. As an ELM variant, the online sequential extreme learning machine (OS-ELM) provides a method for learning from incremental data, while MapReduce offers a simple, scalable, and fault-tolerant framework for large-scale learning. In this paper, we propose an efficient parallel method for batched online sequential extreme learning machine (BPOS-ELM) training using MapReduce. Map execution time is estimated from historical statistics using regression and inverse-distance-weighted interpolation; Reduce execution time is estimated from complexity analysis and regression. Based on these estimates, BPOS-ELM generates a Map execution plan and a Reduce execution plan, launches one MapReduce job to train multiple OS-ELM models according to the generated plans, and collects execution information to further improve estimation accuracy. Our proposal is evaluated with real and synthetic data. The experimental results show that the accuracy of BPOS-ELM is at the same level as those of OS-ELM and parallel OS-ELM (POS-ELM), with higher training efficiency.
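The sequential core that OS-ELM contributes (and that BPOS-ELM trains in parallel) is a recursive least-squares update of the output weights, so each new data chunk is absorbed without retraining from scratch. A minimal single-machine sketch assuming the standard OS-ELM formulas; the MapReduce scheduling itself is not shown, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def hidden(X, W, b):
    """Fixed random hidden layer shared by all chunks."""
    return np.tanh(X @ W + b)

n_hidden = 20
W = rng.normal(size=(1, n_hidden)); b = rng.normal(size=n_hidden)

# initialization phase: one batch large enough to make H0'H0 invertible
X0 = rng.uniform(-3, 3, size=(50, 1)); T0 = np.sin(X0)
H0 = hidden(X0, W, b)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # tiny ridge for numerical safety
beta = P @ H0.T @ T0

# sequential phase: rank-k RLS update per chunk, no access to old data needed
for _ in range(10):
    Xk = rng.uniform(-3, 3, size=(20, 1)); Tk = np.sin(Xk)
    Hk = hidden(Xk, W, b)
    S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

Xt = np.linspace(-3, 3, 100).reshape(-1, 1)
err = float(np.mean((hidden(Xt, W, b) @ beta - np.sin(Xt)) ** 2))
```

Because each chunk only touches `P` and `beta`, many such model updates can be batched into the Map/Reduce tasks the abstract describes.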

12.
Kernel logistic regression (KLR) is a very powerful algorithm that has been shown to be very competitive with many state-of-the-art machine learning algorithms such as support vector machines (SVM). Unlike SVM, KLR can be easily extended to multi-class problems and produces class posterior probability estimates, making it very useful for many real-world applications. However, training KLR using gradient-based methods or iteratively re-weighted least squares can be unbearably slow for large datasets; coupled with a poorly conditioned design matrix and parameter tuning, training KLR can quickly become infeasible for some real datasets. The goal of this paper is to present simple, fast, scalable, and efficient algorithms for learning KLR. First, based on a simple approximation of the logistic function, a least-squares algorithm for KLR is derived that avoids the iterative tuning of gradient-based methods. Second, inspired by extreme learning machine (ELM) theory, an explicit feature space is constructed through a generalized single-hidden-layer feedforward network and used for training iteratively re-weighted least-squares KLR (IRLS-KLR) and the newly proposed least-squares KLR (LS-KLR). Finally, for large-scale and/or poorly conditioned problems, a robust and efficient preconditioned learning technique is proposed for learning the algorithms presented in the paper. Numerical results on a series of artificial and 12 real benchmark datasets show first that LS-KLR compares favorably with SVM and traditional IRLS-KLR in terms of accuracy and learning speed. Second, the extension of ELM to KLR results in simple, scalable, and very fast algorithms with generalization performance comparable to their original versions. Finally, the introduced preconditioned learning method can significantly increase the learning speed of IRLS-KLR.
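The least-squares idea can be illustrated with plain regularized least squares in the kernel-induced space on ±1 labels. This is a hedged sketch of the general approach, with illustrative data and names; the paper's actual approximation of the logistic function and its preconditioning may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# toy two-class data: two Gaussian blobs, labels coded as -1 / +1
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.hstack([-np.ones(40), np.ones(40)])

K = rbf(X, X)
# one regularized linear solve replaces the IRLS iterations
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)

pred = np.sign(K @ alpha)
acc = float((pred == y).mean())
```

The appeal is exactly what the abstract claims: a single linear system instead of repeated re-weighted solves, at the cost of approximating the logistic loss.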

13.
A neural fuzzy control system with structure and parameter learning   Total citations: 8 (self: 0, others: 8)
A general connectionist model, called neural fuzzy control network (NFCN), is proposed for the realization of a fuzzy logic control system. The proposed NFCN is a feedforward multilayered network which integrates the basic elements and functions of a traditional fuzzy logic controller into a connectionist structure which has distributed learning abilities. The NFCN can be constructed from supervised training examples by machine learning techniques, and the connectionist structure can be trained to develop fuzzy logic rules and find membership functions. Associated with the NFCN is a two-phase hybrid learning algorithm which utilizes unsupervised learning schemes for structure learning and the backpropagation learning scheme for parameter learning. By combining both unsupervised and supervised learning schemes, the learning speed converges much faster than the original backpropagation algorithm. The two-phase hybrid learning algorithm requires exact supervised training data for learning. In some real-time applications, exact training data may be expensive or even impossible to obtain. To solve this problem, a reinforcement neural fuzzy control network (RNFCN) is further proposed. The RNFCN is constructed by integrating two NFCNs, one functioning as a fuzzy predictor and the other as a fuzzy controller. By combining a proposed on-line supervised structure-parameter learning technique, the temporal difference prediction method, and the stochastic exploratory algorithm, a reinforcement learning algorithm is proposed, which can construct a RNFCN automatically and dynamically through a reward-penalty signal (i.e., “good” or “bad” signal). Two examples are presented to illustrate the performance and applicability of the proposed models and learning algorithms.

14.
A new type of neural Mealy machine is constructed. The neural Mealy machine has a certain learning ability: it acquires a (von Neumann) computer structure chiefly through learning, and can largely avoid the catastrophic consequences an ordinary computer suffers when a single circuit is damaged. In essence, it is obtained by having a recurrent neural network simulate a Mealy machine via the BP optimization algorithm, and the learning performance of this network is studied experimentally. Based on the equivalence of formal grammars and automata, a neural network is used to perform grammatical inference: the network is first trained on a sample set that can be generated by a classical Mealy machine, and an automaton is then extracted from the trained network.

15.
In this paper, a new method for nonlinear system identification via an extreme learning machine neural network based Hammerstein model (ELM-Hammerstein) is proposed. The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The nonlinear system is identified by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the structure of the ELM-Hammerstein model from input-output data. A generalized ELM algorithm is proposed to estimate the parameters of the ELM-Hammerstein model, in which the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.

16.
Artificial neural networks have, in recent years, been very successfully applied in a wide range of areas. A major reason for this success has been the existence of a training algorithm called backpropagation. This algorithm relies upon the neural units in a network having input/output characteristics that are continuously differentiable. Such units are significantly less easy to implement in silicon than are neural units with Heaviside (step-function) characteristics. In this paper, we show how a training algorithm similar to backpropagation can be developed for 2-layer networks of Heaviside units by treating the network weights (i.e., interconnection strengths) as random variables. This is then used as a basis for the development of a training algorithm for networks with any number of layers by drawing upon the idea of internal representations. Some examples are given to illustrate the performance of these learning algorithms.

17.
Two parallel shared-memory algorithms are presented for the optimization of generalized networks. These algorithms are based on the allocation of arc-related operations in the (generalized) network simplex method. One method takes advantage of the multi-tree structure of basic solutions and performs pivot operations in parallel, utilizing locking to ensure correctness. The other algorithm utilizes only one processor for sequential pivoting, but parallelizes the pricing operation and overlaps this task with pivoting in a speculative manner (i.e. since pivoting and pricing involve data dependencies, a candidate for flow change generated by the pricing processors is not guaranteed to be acceptable, but in practice generally has this property). The relative performance of these two methods (on the Sequent Symmetry S81 multiprocessor) is compared and contrasted with that of a fast sequential algorithm on a set of large-scale test problems of up to 1,000,000 arcs. This research was supported in part by NSF grant CCR-8709952 and AFOSR grant AFOSR-86-0194.

18.
Kernel extreme learning machine (KELM) increases the robustness of the extreme learning machine (ELM) by turning data that are linearly non-separable in a low-dimensional space into linearly separable data. However, the internal parameters of ELM are initialized at random, causing the algorithm to be unstable. In this paper, we use the active-operators particle swarm optimization algorithm (APSO) to obtain an optimal set of initial parameters for KELM, thus creating an optimized KELM classifier named APSO-KELM. Experiments on standard genetic datasets show that APSO-KELM achieves higher classification accuracy than the existing ELM and KELM, and than algorithms combining PSO/APSO with ELM/KELM, such as PSO-KELM, APSO-ELM, and PSO-ELM. Moreover, APSO-KELM has good stability and convergence, and is shown to be a reliable and effective classification algorithm.
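For reference, the KELM classifier being tuned here has a closed-form solution for its output weights. A minimal sketch assuming the standard formulation beta = (I/C + K)^(-1) T, with illustrative toy data; the APSO parameter search over the kernel and regularization parameters is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, gamma=2.0):
    """Gaussian (RBF) kernel matrix; gamma is one of the parameters APSO would tune."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# toy XOR-like two-class problem: four clusters, labels -1 / +1
centers = np.array([[0, 0], [1, 1], [0, 1], [1, 0]])
labels = np.array([1, 1, -1, -1])
X = np.vstack([c + rng.normal(0, 0.1, (25, 2)) for c in centers])
y = np.repeat(labels, 25).astype(float)

C = 100.0                                 # regularization constant (also tunable)
K = rbf(X, X)
beta = np.linalg.solve(np.eye(len(X)) / C + K, y)  # KELM output weights, one solve

acc = float((np.sign(K @ beta) == y).mean())
```

Since training is a single linear solve, the expensive part is the outer search over `gamma` and `C`, which is exactly where a swarm optimizer like APSO is plugged in.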

19.
Combining cross-validation, online learning, and ensemble learning from machine learning, this paper dynamically combines the weights of investment strategies built on different high-dimensional covariance estimators, so as to obtain out-of-sample performance superior to traditional portfolio strategies. To this end, the sample-update scheme, learning model, and objective function of the online weighted ensemble (OWE) algorithm, a relatively recent machine learning method, are replaced or modified; the improved mixed-OWE algorithm is better suited to dynamically mixing multiple portfolios. Through numerical simulation, mixed-OWE is applied to an investment problem with a quadratic utility objective, and its out-of-sample performance beats traditional static methods. The paper then uses roughly ten years of A-share data to run mixed-OWE on global minimum-variance portfolio investment; after some parameter tuning, the portfolio variance achieved by the mixed-OWE strategy is lower than that of its component portfolios and of the equal-weight portfolio.

20.
Theory and Application of the Nested GA-BP Algorithm   Total citations: 2 (self: 0, others: 2)
The characteristics of the BP algorithm, the genetic algorithm, and the GA-BP-APARTING algorithm are analyzed, and a GA-BP-NESTING algorithm is proposed. The BP, GA, GA-BP-APARTING, and GA-BP-NESTING algorithms are compared under both the online and the offline learning modes of artificial neural networks. Two findings emerge: first, the assignment of the network's initial weights has a large influence on neural network training; second, under offline learning the GA-BP-NESTING algorithm performs best.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号