Similar Literature
19 similar records found (search time: 718 ms)
1.
To address the large randomness inherent in the extreme learning machine, an ELM model based on differential evolution (DE-ELM) is proposed. The differential evolution (DE) algorithm searches for optimal values of the randomly assigned input-weight matrix and hidden-layer biases of the ELM, reducing the impact of this randomness, suppressing network oscillation, and improving the ELM's prediction accuracy. DE-ELM is applied to battery state-of-charge (SOC) prediction and compared with ELM and a BP neural network. The results show that DE-ELM outperforms both on battery SOC prediction and meets the required prediction accuracy.
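The DE-over-ELM idea can be sketched in a few lines: evolve the concatenated (input-weight, bias) vector with a plain DE/rand/1/bin loop, scoring each candidate by the training error of the resulting ELM. Everything here (toy data in place of battery SOC samples, network size, DE settings F and CR) is a hypothetical stand-in, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit_predict(W, b, Xtr, ytr, Xte):
    # Sigmoid hidden layer; output weights by least squares (pseudo-inverse).
    Htr = 1.0 / (1.0 + np.exp(-(Xtr @ W + b)))
    beta = np.linalg.pinv(Htr) @ ytr
    Hte = 1.0 / (1.0 + np.exp(-(Xte @ W + b)))
    return Hte @ beta

# Toy regression data standing in for battery SOC samples (hypothetical).
X = rng.uniform(-1, 1, (120, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=120)
Xtr, Xte, ytr, yte = X[:80], X[80:], y[:80], y[80:]

n_hidden, dim = 10, X.shape[1]
n_params = dim * n_hidden + n_hidden      # input weights + hidden biases

def fitness(theta):
    W = theta[:dim * n_hidden].reshape(dim, n_hidden)
    b = theta[dim * n_hidden:]
    pred = elm_fit_predict(W, b, Xtr, ytr, Xtr)
    return np.mean((pred - ytr) ** 2)     # training MSE as DE objective

# Plain DE/rand/1/bin over the concatenated (W, b) vector.
pop = rng.uniform(-1, 1, (20, n_params))
cost = np.array([fitness(p) for p in pop])
init_best = cost.min()
F, CR = 0.5, 0.9
for gen in range(30):
    for i in range(len(pop)):
        a, b_, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.where(rng.random(n_params) < CR, a + F * (b_ - c), pop[i])
        f = fitness(trial)
        if f < cost[i]:                   # greedy selection: cost never rises
            pop[i], cost[i] = trial, f

best = pop[np.argmin(cost)]
W = best[:dim * n_hidden].reshape(dim, n_hidden)
b = best[dim * n_hidden:]
mse_de = np.mean((elm_fit_predict(W, b, Xtr, ytr, Xte) - yte) ** 2)
print(mse_de)
```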

2.
An ensemble ELM algorithm based on dataset partitioning (DS-E-ELM) is proposed. The algorithm consists of three steps. First, the dataset is split into k disjoint subsets, and each combination of k−1 subsets forms a training set, yielding k different datasets. Next, an ELM is trained on each of the k datasets, producing k classifiers. Finally, the predictions of the k classifiers are combined by majority voting. Experiments on six tumor datasets show that DS-E-ELM achieves higher classification accuracy and better stability than a single ELM, Bagging, and Boosting.
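A minimal sketch of the three DS-E-ELM steps, using toy two-class data in place of the tumor datasets (the sizes, k = 5, and the simple tanh-ELM are assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_elm(X, y, n_hidden=20, seed=0):
    # Random hidden layer + least-squares output weights (one-hot targets).
    r = np.random.default_rng(seed)
    W = r.normal(size=(X.shape[1], n_hidden))
    b = r.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    T = np.eye(2)[y]                      # one-hot class targets
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class data standing in for the tumor datasets (hypothetical).
X = rng.normal(size=(150, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xte = rng.normal(size=(40, 4))
yte = (Xte[:, 0] + Xte[:, 1] > 0).astype(int)

# Step 1: split into k disjoint subsets; each training set leaves one out.
k = 5
folds = np.array_split(rng.permutation(len(X)), k)

# Step 2: train one ELM per leave-one-subset-out training set.
models = []
for i in range(k):
    idx = np.concatenate([folds[j] for j in range(k) if j != i])
    models.append(train_elm(X[idx], y[idx], seed=i))

# Step 3: majority vote over the k classifiers.
votes = np.stack([predict_elm(m, Xte) for m in models])     # (k, n_test)
ensemble_pred = (votes.sum(axis=0) > k // 2).astype(int)
acc = (ensemble_pred == yte).mean()
print(acc)
```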

3.
To address the difficulty of cost prediction in construction project cost management, the bird swarm algorithm (BSA) is used to optimize the parameters of an extreme learning machine (ELM) model. First, BSA optimizes the input weights and biases of the ELM; next, a BSA-ELM construction cost prediction model is built; finally, the model is validated against real construction cost data. The results show that the model predicts cost more accurately than the ELM, CSO-ELM, PSO-ELM, and BP neural network models, and it offers a new method for similar prediction problems.

4.
周晓剑, 肖丹, 付裕. 《运筹与管理》, 2022, 31(8): 137-142
In traditional one-shot training algorithms for support vector regression, learning must restart from scratch whenever samples are added, whereas an incremental algorithm can fully reuse the results of the previous stage. Incremental SVR algorithms are usually based on the ε-insensitive loss function, which is sensitive to large outliers; the Huber loss function has low sensitivity to outliers, so in noisy, real-world settings it is the better choice. On this basis, this paper proposes an incremental Huber-SVR algorithm based on the Huber loss function. The algorithm continually integrates new sample information into the already-built model instead of remodeling from scratch. Compared with the incremental ε-SVR and incremental RBF algorithms, incremental Huber-SVR achieves higher prediction accuracy in predictive modeling on real data.
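The loss-function comparison driving the choice of Huber-SVR can be seen directly from the two definitions (the δ and ε values below are arbitrary illustrations):

```python
import numpy as np

def eps_insensitive(r, eps=0.1):
    # SVR's ε-insensitive loss: zero inside the tube, linear outside.
    return np.maximum(np.abs(r) - eps, 0.0)

def huber(r, delta=1.0):
    # Huber loss: quadratic near zero, linear for large residuals,
    # so a single large outlier cannot dominate the objective.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

residuals = np.array([0.05, 0.5, 5.0])
print(eps_insensitive(residuals))   # small residuals inside the tube cost 0
print(huber(residuals))             # the outlier at 5.0 grows only linearly
```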

5.
Given the complex nonlinear relationships among the factors affecting construction cost, accurate cost prediction is difficult. A CSO-ELM cost prediction model combining the chicken swarm optimization (CSO) algorithm and the extreme learning machine (ELM) is proposed. First, CSO performs a global search for the optimal input weights and biases of the ELM; these parameters are then used to build the CSO-ELM construction cost prediction model; finally, the model is validated on 11 air-membrane reinforced concrete silo projects. The results show that CSO effectively optimizes the ELM input weights and biases, and that CSO-ELM achieves higher prediction accuracy and efficiency than traditional ELM and BP neural network models, providing an effective method for construction cost prediction.

6.
When trained on imbalanced datasets, the standard weighted extreme learning machine assigns weights only across classes, not to individual samples, ignoring differences among samples. To address this, a standard ELM is used to estimate per-sample weights, which are then combined with the class weights, yielding an improved doubly weighted ELM classification algorithm that handles imbalanced class distributions well. Experimental results show that the doubly weighted ELM improves classification accuracy compared with the singly weighted ELM and the unweighted ELM.
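A sketch of the double-weighting idea: class weights from inverse class frequency combined with per-sample weights estimated by a first, unweighted ELM. The error-based sample-weighting rule used here is one plausible reading, not necessarily the paper's exact scheme, and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced toy data: 90 samples of class 0, 10 of class 1 (hypothetical).
X = np.vstack([rng.normal(0, 1, (90, 3)), rng.normal(2, 1, (10, 3))])
y = np.array([0] * 90 + [1] * 10)

# Class weights: inverse class frequency (the "single" weighting).
counts = np.bincount(y)
w_class = (1.0 / counts)[y]

# Per-sample weights from a first, unweighted ELM: samples the rough model
# fits poorly get higher weight (assumed scheme for illustration).
W = rng.normal(size=(3, 15)); b = rng.normal(size=15)
H = np.tanh(X @ W + b)
T = np.eye(2)[y]
beta0 = np.linalg.pinv(H) @ T
err = np.abs(H @ beta0 - T).sum(axis=1)
w_sample = err / err.sum() * len(y)          # normalized error-based weights

# Doubly weighted regularized ELM: beta = (H' D H + I/C)^-1 H' D T.
D = np.diag(w_class * w_sample)
C = 10.0
beta = np.linalg.solve(H.T @ D @ H + np.eye(15) / C, H.T @ D @ T)
pred = np.argmax(H @ beta, axis=1)
print((pred == y).mean())
```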

7.
A distributed algorithm for minimizing a quadratic loss function with the conjugate gradient method is proposed. Each local worker machine uses first-order derivative information of its local loss function to update the iterate, performing two rounds of communication per iteration, so that the sum of the loss functions on the master machine is minimized through cooperative communication. Theoretical analysis proves that the algorithm converges linearly. A comparison with the distributed alternating direction method of multipliers (ADMM) on simulated datasets shows that the distributed conjugate gradient algorithm better matches centralized performance.
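The two communication rounds per CG iteration can be sketched as follows: workers hold local pieces (A_i, b_i) of a quadratic loss, one round aggregates local gradients and the other aggregates local Hessian-vector products. The explicit Python sums stand in for real message passing; sizes and data are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Quadratic loss split across m workers: f(x) = sum_i 0.5 x'A_i x - b_i'x.
m, d = 4, 6
A_local = []
for _ in range(m):
    M = rng.normal(size=(d, d))
    A_local.append(M @ M.T + np.eye(d))   # local SPD pieces
b_local = [rng.normal(size=d) for _ in range(m)]
A = sum(A_local); b = sum(b_local)

def aggregate_grad(x):
    # Communication round 1: workers send local gradients A_i x - b_i.
    return sum(Ai @ x for Ai in A_local) - b

def aggregate_Ap(p):
    # Communication round 2: workers send local products A_i p.
    return sum(Ai @ p for Ai in A_local)

# Conjugate gradient on the aggregated quadratic.
x = np.zeros(d)
r = -aggregate_grad(x)         # residual = b - A x
p = r.copy()
for _ in range(d):
    Ap = aggregate_Ap(p)
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

print(np.linalg.norm(A @ x - b))   # ~0: exact CG finishes in d steps
```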

8.
To address redundancy in high-dimensional data and the unstable performance of the extreme learning machine (ELM) caused by its randomly assigned weights, the restricted Boltzmann machine (RBM) is combined with the ELM into an RBM-optimized ELM algorithm (RBM-ELM). The RBM reduces the dimensionality of the raw features and, at the same time, yields optimized values for the ELM input-layer weights and hidden-layer biases. Experimental results show that RBM-ELM achieves higher classification accuracy than four machine learning algorithms: random forest, logistic regression, support vector machine, and extreme learning machine.

9.
To address the heavy derivative computation of the sigmoid activation function in traditional convolutional neural networks (CNNs) and the resulting inefficiency in extracting SAR image features, this paper replaces the sigmoid activation in the CNN with the ReLU activation and combines it with the extreme learning machine (ELM), proposing a CNN-ELM SAR image recognition algorithm. Classification experiments on SAR images show that the algorithm induces network sparsity, alleviates overfitting, accelerates network convergence, and achieves a higher recognition rate.
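The computational argument for swapping sigmoid for ReLU is visible in the derivatives themselves: the sigmoid derivative requires exponentials and is bounded by 0.25 (hence vanishing gradients), while the ReLU derivative is a single comparison yielding sparse 0/1 values:

```python
import numpy as np

x = np.linspace(-4, 4, 9)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sigmoid derivative needs an exp plus multiplies per element...
d_sigmoid = sigmoid(x) * (1.0 - sigmoid(x))

# ...while the ReLU derivative is one comparison: 1 where z > 0, else 0.
relu = np.maximum(x, 0.0)
d_relu = (x > 0).astype(float)

print(d_relu)           # exactly 0/1: sparse activations, no saturation
print(d_sigmoid.max())  # <= 0.25, so deep sigmoid nets see shrinking gradients
```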

10.
Traditional machine learning methods cannot capture the uncertainty and dynamic patterns of electricity load demand. This paper applies a recently proposed online learning algorithm for hidden Markov models to load forecasting, fully extracting the uncertainty and dynamics in historical data, and combines it with a decomposition algorithm to exploit those dynamics more precisely and improve forecast accuracy. The algorithm is based on a hidden Markov probabilistic forecasting model that is updated online as new samples arrive, adapting to the latest data; STL time-series decomposition separates the load data into components with different uncertainty and dynamics, each of which is forecast by its own online learner, yielding a combined load forecasting algorithm. Tests on three public electricity load datasets show that, compared with a single online learning model, the proposed combined algorithm improves accuracy, reducing the relative prediction error by up to 27%.
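The decompose-then-forecast pattern can be sketched with a naive additive seasonal split (a crude stand-in for STL; the per-component forecasters here are trivial trend/seasonal rules, not the paper's online HMM, and the series is synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hourly "load": trend + daily seasonality + noise (hypothetical).
t = np.arange(24 * 30)
period = 24
series = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / period) \
         + rng.normal(0, 1, len(t))

train, test = series[:-period], series[-period:]

# Naive additive decomposition of the training window (stand-in for STL):
seasonal = np.array([train[i::period].mean() for i in range(period)])
seasonal -= seasonal.mean()
deseason = train - np.tile(seasonal, len(train) // period + 1)[:len(train)]
coef = np.polyfit(np.arange(len(train)), deseason, 1)   # linear trend
trend = np.polyval(coef, np.arange(len(train)))
remainder = deseason - trend

# Forecast each component separately, then recombine.
h = np.arange(len(train), len(train) + period)
trend_fc = np.polyval(coef, h)               # extrapolate trend
season_fc = seasonal                         # repeat seasonal pattern
remainder_fc = remainder[-period:].mean()    # near-zero persistence
forecast = trend_fc + season_fc + remainder_fc

mae = np.abs(forecast - test).mean()
print(mae)
```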

11.
Kernel logistic regression (KLR) is a very powerful algorithm that has been shown to be very competitive with many state-of-the-art machine learning algorithms such as support vector machines (SVM). Unlike SVM, KLR can be easily extended to multi-class problems and produces class posterior probability estimates, making it very useful for many real-world applications. However, training KLR using gradient-based methods or iteratively re-weighted least squares can be unbearably slow for large datasets. Coupled with a poorly conditioned design matrix and parameter tuning, training KLR can quickly become infeasible for some real datasets. The goal of this paper is to present simple, fast, scalable, and efficient algorithms for learning KLR. First, based on a simple approximation of the logistic function, a least squares algorithm for KLR is derived that avoids the iterative tuning of gradient-based methods. Second, inspired by extreme learning machine (ELM) theory, an explicit feature space is constructed through a generalized single-hidden-layer feedforward network and used for training iteratively re-weighted least squares KLR (IRLS-KLR) and the newly proposed least squares KLR (LS-KLR). Finally, for large-scale and/or poorly conditioned problems, a robust and efficient preconditioned learning technique is proposed for the algorithms presented in the paper. Numerical results on a series of artificial and 12 real benchmark datasets show first that LS-KLR compares favorably with SVM and traditional IRLS-KLR in terms of accuracy and learning speed. Second, the extension of ELM to KLR results in simple, scalable and very fast algorithms with generalization performance comparable to their original versions. Finally, the introduced preconditioned learning method can significantly increase the learning speed of IRLS-KLR.

12.
To capture the complex features of agricultural futures price fluctuations and further improve prediction accuracy, a decomposition-ensemble prediction model combining variational mode decomposition (VMD) and the extreme learning machine (ELM) is constructed. First, exploiting its adaptive and non-recursive nature, VMD is chosen to decompose the complex time series into multiple intrinsic mode functions (IMFs). Second, to address the difficulty of selecting the key VMD parameter K (the number of modes), a method based on the minimum fuzzy entropy criterion is proposed to find the optimal K, effectively avoiding mode mixing and end effects and thereby improving VMD's decomposition quality. Finally, ELM's strong learning and generalization capability is used to forecast each sub-series at its own scale, and the forecasts are aggregated into the final prediction. Using CBOT rice, wheat, and soybean meal futures prices as the research objects, the empirical results show that the decomposition-ensemble model significantly outperforms single prediction models and other decomposition-ensemble models on both accuracy and directional metrics, providing a new approach to agricultural futures price forecasting.

13.
Extreme learning machine (ELM) is not only an effective classifier in supervised learning but can also be applied to unsupervised and semi-supervised learning. The model structures of the unsupervised extreme learning machine (US-ELM) and the semi-supervised extreme learning machine (SS-ELM) are the same as that of ELM; the difference between them is the cost function. We introduce a kernel function into US-ELM, proposing the unsupervised extreme learning machine with kernel (US-KELM), and likewise its semi-supervised counterpart, SS-KELM. Wavelet analysis has the characteristics of multivariate interpolation and sparse approximation, and wavelet kernel functions have been widely used in support vector machines. Therefore, to combine the wavelet kernel function with US-ELM and SS-ELM, the unsupervised extreme learning machine with wavelet kernel function (US-WKELM) and the semi-supervised extreme learning machine with wavelet kernel function (SS-WKELM) are proposed in this paper. The experimental results show the feasibility and validity of US-WKELM and SS-WKELM in clustering and classification.
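A common translation-invariant wavelet kernel (built from the Morlet mother wavelet, as widely used in wavelet SVMs) can be computed like this; whether this is the exact mother wavelet used in US-WKELM/SS-WKELM is an assumption:

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    # Translation-invariant wavelet kernel with the Morlet mother wavelet
    # h(u) = cos(1.75 u) * exp(-u^2 / 2):
    #   K(x, y) = prod_i h((x_i - y_i) / a)
    U = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-U**2 / 2.0), axis=2)

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 3))
K = wavelet_kernel(X, X)
print(K.shape)
print(np.allclose(np.diag(K), 1.0))  # h(0) = 1, so K(x, x) = 1
```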

14.
Kernel extreme learning machine (KELM) increases the robustness of the extreme learning machine (ELM) by mapping data that are linearly non-separable in a low-dimensional space into a space where they become linearly separable. However, the internal parameters of ELM are initialized at random, making the algorithm unstable. In this paper, we use the active-operators particle swarm optimization algorithm (APSO) to obtain an optimal set of initial parameters for KELM, creating an optimized KELM classifier named APSO-KELM. Experiments on standard gene datasets show that APSO-KELM achieves higher classification accuracy than the existing ELM, KELM, and algorithms combining PSO/APSO with ELM/KELM, such as PSO-KELM, APSO-ELM, and PSO-ELM. Moreover, APSO-KELM has good stability and convergence, and is shown to be a reliable and effective classification algorithm.
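For context, the KELM core that APSO would wrap is a single closed-form solve, beta = (K + I/C)^(-1) T, with no random hidden weights at all; APSO's role (not shown) would be tuning parameters such as C and the kernel width. The data and parameter values here are hypothetical stand-ins for the gene datasets:

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Toy two-class data (hypothetical stand-in for the gene datasets).
X = rng.normal(size=(60, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
T = np.eye(2)[y]

# KELM closed form: beta = (K + I/C)^(-1) T. No random hidden weights,
# which is exactly what removes ELM's initialization instability.
C = 100.0
K = rbf_kernel(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)

# Prediction on new points: f(x) = k(x, X) @ beta.
Xte = rng.normal(size=(10, 4))
pred = np.argmax(rbf_kernel(Xte, X) @ beta, axis=1)
print(pred)
```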

15.
Road traffic state is increasingly important for urban management and for people's travel, affecting many aspects of daily life. Taking Shenzhen traffic as the research object, a road network system is built from basic vehicle data and road coordinates, and a traffic state index (TSI) is derived from two aspects: vehicle speed and density. A deep long short-term memory (LSTM) neural network is used to predict the speed and density indicators, and simulation experiments comparing it with the extreme learning machine (ELM), the ARMA time-series model, and the BP neural network show that the LSTM achieves better prediction accuracy and more stable long-horizon forecasts than the traditional prediction models. Finally, the predictions are used to compute the TSI, which reflects road congestion more intuitively, providing people with accurate traffic state forecasts.

16.
In this paper, a new method for nonlinear system identification is proposed, based on a Hammerstein model with an extreme learning machine neural network (ELM-Hammerstein). The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The nonlinear system is identified by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the model structure from input-output data. A generalized ELM algorithm is proposed to estimate the model parameters, in which the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.
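A toy version of the Hammerstein structure and the simultaneous least-squares estimate: the static nonlinearity is expanded in random hidden units ELM-style, and the linear-dynamics coefficient and output weights are fitted together in one regression. The system, nonlinearity, and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hammerstein system: static nonlinearity f(u) followed by linear dynamics
#   y(t) = a1 * y(t-1) + b1 * v(t-1),   v = f(u) = u + 0.5 * u^2  (toy choice)
a1, b1 = 0.7, 1.0
u = rng.uniform(-1, 1, 200)
v = u + 0.5 * u**2
y = np.zeros(200)
for t in range(1, 200):
    y[t] = a1 * y[t - 1] + b1 * v[t - 1]

# ELM-Hammerstein idea: model f with a random-hidden-layer expansion and fit
# the dynamics coefficient and output weights together by least squares:
#   y(t) ~ a1 * y(t-1) + sum_j beta_j * g_j(u(t-1))
n_hidden = 10
W = rng.normal(size=(1, n_hidden)); b = rng.normal(size=n_hidden)
G = np.tanh(u[:, None] @ W + b)                 # hidden outputs g_j(u(t))
Phi = np.column_stack([y[:-1], G[:-1]])         # regressors at time t-1
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

y_hat = Phi @ theta
mse = np.mean((y_hat - y[1:]) ** 2)             # one-step-ahead error
a1_hat = theta[0]                               # recovered dynamics coefficient
print(mse, a1_hat)
```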

17.
A short-term load forecasting method based on the wavelet transform and an extreme learning machine optimized by an improved firefly algorithm is proposed. Wavelet decomposition and reconstruction denoise the original load series; in the model training stage, the improved firefly algorithm optimizes the ELM parameters to obtain the optimal model for each sub-series; each sub-series is then forecast separately and the results are superimposed to give the final prediction. Numerical experiments on data series at two time scales show that the model outperforms classical prediction models such as ARMA, BP neural networks, support vector machines, and LSSVM.

18.
In this paper we study the problem of learning the gradient function, with applications to variable selection and determining variable covariation. First, we propose a novel unifying framework for coordinate gradient learning from the perspective of multi-task learning; various variable selection methods can be regarded as special instances of this framework. Second, we formulate the dual problems of gradient learning with general loss functions, enabling the direct application of standard optimization toolboxes to gradient learning. For instance, gradient learning with the SVM loss can be solved by quadratic programming (QP) routines. Third, we propose a novel gradient learning formulation which can be cast as a kernel matrix learning problem; its relation to sparse regularization is highlighted. A semi-infinite linear programming (SILP) approach and an iterative optimization approach are proposed to solve this problem efficiently. Finally, we validate our proposed approaches on both synthetic and real datasets.

19.
Based on the proper orthogonal decomposition-radial basis function (POD-RBF) method, a geometric identification method for the pipeline inner wall was proposed to solve the internal corrosion detection problem of natural gas and oil pipelines. For the static magnetic field, a simplified finite element model of the pipeline was established and a variable-geometry sample library was constructed, enabling the POD-RBF surrogate to predict the response of an arbitrary geometry. The proposed method achieves reduced-order analysis and avoids repeated solution of the stiffness matrix as the geometry changes during identification, significantly improving computational efficiency. Finally, the grey wolf optimization (GWO) algorithm was used to optimize the objective function, avoiding sensitivity calculations during geometry changes. Numerical examples show that the proposed method identifies the pipeline inner-wall geometry with high efficiency and accuracy, and remains stable even under introduced noise. © 2023 Editorial Office of Applied Mathematics and Mechanics. All rights reserved.
