Similar Articles
18 similar articles found (search time: 343 ms).
1.
To overcome the arbitrariness of parameter setting in the least squares support vector machine (LS-SVM), the fruit fly optimization algorithm (FOA) is used to select its parameters, yielding a hybrid FOA-optimized LS-SVM forecasting model. Forecasting China's logistics demand is taken as a case study to verify the feasibility and effectiveness of the model. The results show that, compared with a single LS-SVM and an LS-SVM optimized by simulated annealing, the proposed model not only selects parameter values effectively but also achieves higher forecasting accuracy.
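A minimal Python sketch of this idea, assuming an RBF-kernel LS-SVM solved as a linear system and a simplified fruit fly optimization loop over (gamma, sigma); the fitness function (validation RMSE), search ranges, and step sizes are illustrative choices, not the authors' implementation.

# Sketch: fruit fly optimization (FOA) tuning an LS-SVM's (gamma, sigma).
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, coefficients alpha

def lssvm_predict(X_train, X_new, b, alpha, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

def foa_tune(X_tr, y_tr, X_val, y_val, n_flies=20, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    best_pos, best_err = rng.uniform(0.1, 1.0, size=(2, 2)), np.inf
    for _ in range(n_iter):
        # flies search randomly around the current best position
        pos = best_pos + rng.uniform(-0.1, 0.1, size=(n_flies, 2, 2))
        for p in pos:
            # "smell concentration judgment": parameter = 1 / distance to origin
            gamma, sigma = 1.0 / np.linalg.norm(p[0]), 1.0 / np.linalg.norm(p[1])
            b, alpha = lssvm_fit(X_tr, y_tr, gamma, sigma)
            err = np.sqrt(np.mean((lssvm_predict(X_tr, X_val, b, alpha, sigma) - y_val) ** 2))
            if err < best_err:
                best_err, best_pos = err, p
    return 1.0 / np.linalg.norm(best_pos[0]), 1.0 / np.linalg.norm(best_pos[1])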

2.
To forecast time series with nonlinear characteristics, a chaos-based least squares support vector machine is proposed. The algorithm reconstructs the time series in phase space and uses the resulting embedding dimension and time delay to select the data samples; combining the least squares principle with the support vector machine, a chaos LS-SVM forecasting model is built. The model is applied to the soil moisture time series of the Luancheng station. The results show that phase space reconstruction improves the selection of data samples and, judged by the model evaluation indices, the chaos LS-SVM model can accurately predict nonlinear time series, giving it good theoretical and practical value.
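A minimal sketch of the sample-construction step described here: delay embedding of a scalar series into phase-space vectors, assuming the embedding dimension m and delay tau have already been estimated (the example values below are illustrative); the resulting (X, y) pairs can then be fed to an LS-SVM regressor.

# Sketch: turn a scalar time series into (input, target) samples by
# phase-space reconstruction with embedding dimension m and time delay tau.
import numpy as np

def delay_embed(series, m, tau):
    series = np.asarray(series, dtype=float)
    n_vectors = len(series) - (m - 1) * tau - 1   # leave one step for the target
    X = np.empty((n_vectors, m))
    y = np.empty(n_vectors)
    for i in range(n_vectors):
        X[i] = series[i : i + m * tau : tau]      # [x_i, x_{i+tau}, ..., x_{i+(m-1)tau}]
        y[i] = series[i + (m - 1) * tau + 1]      # one-step-ahead target
    return X, y

# Example: soil-moisture-like series, m=4, tau=2 (illustrative values only).
ts = np.sin(np.linspace(0, 20, 300)) + 0.05 * np.random.randn(300)
X, y = delay_embed(ts, m=4, tau=2)
# X and y can now be split into train/test sets and used to train the regressor.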

3.
A grey combination forecasting model for pipeline corrosion rate based on LS-SVM (total citations: 1; self-citations: 0; citations by others: 1)
To improve the accuracy of pipeline corrosion rate prediction, a grey combination forecasting model based on the least squares support vector machine is established. The predictions of several grey models for the pipeline corrosion rate serve as the inputs of the support vector machine and the measured corrosion rates as its outputs; the machine is trained with the least squares support vector regression algorithm and a Gaussian kernel, and the trained machine then produces the combined forecast. The model combines the advantages of grey models, which need little raw data and are simple to build and compute, with those of the LS-SVM, namely strong generalization, good nonlinear fitting, and suitability for small samples; it thus compensates for the shortcomings of any single model and avoids the tendency of neural-network combination forecasting to fall into local optima. The model is simple in structure and practical, and simulation results verify its effectiveness.
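A minimal sketch of the combination scheme, assuming GM(1,1) grey models generate the individual forecasts and scikit-learn's KernelRidge stands in for an RBF-kernel LS-SVM combiner; the corrosion-rate numbers and the choice of grey sub-models are illustrative, not the paper's data.

# Sketch: GM(1,1) grey forecasts as inputs to a combining kernel regressor.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def gm11_fit_predict(x0, n_ahead=1):
    """Fit GM(1,1) to a 1-D positive series and return in-sample + ahead fits."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

# Corrosion-rate series (illustrative numbers).
rate = np.array([0.42, 0.45, 0.44, 0.48, 0.50, 0.53, 0.55, 0.58])

# Two grey forecasts built from different windows of the history
# (the second is padded with measured values for its first points).
f_full = gm11_fit_predict(rate)[:len(rate)]
f_rec = np.concatenate([rate[:3], gm11_fit_predict(rate[3:])[:len(rate) - 3]])
features = np.column_stack([f_full, f_rec])       # grey predictions -> combiner inputs

combiner = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0)
combiner.fit(features, rate)                      # measured rates -> combiner outputs
combined_forecast = combiner.predict(features)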

4.
To improve the accuracy of financial distress prediction and reduce the number of training samples and the training time, a reduced-memory least squares support vector machine (LS-SVM) prediction model based on a genetic algorithm and information entropy is proposed, building on the traditional support vector machine (SVM) model by applying the genetic algorithm, information entropy, and a reduced-memory algorithm to the LS-SVM. Expressions for the entropy of the discrete sequences that arise in financial distress prediction and for the SVM kernel function are derived independently, and the implementation steps of the improved model are given. Experimental results show that the model is significantly better than both the LS-SVM and the traditional SVM model in prediction accuracy, number of training samples, and training time.

5.
Aviation materiel spare parts are an important factor in keeping aviation equipment available for daily training and combat use. For some spare parts, sample data are scarce, the influencing factors are numerous and complex, and prediction results deviate considerably from the availability requirements of the equipment system. A spare-parts forecasting model combining grey relational analysis (GRA), partial least squares (PLS), and the least squares support vector machine (LSSVM) is therefore established. Spare-parts data of a certain unmanned aerial vehicle are collected; grey relational analysis of the statistical data extracts the factors related to spare-parts demand as training samples and identifies the key factors; partial least squares then extracts features from the key factors, and the extracted features are fed into the LSSVM for model building and analysis. Experiments verify the feasibility and applicability of the method, which can meet the practical needs of UAV spare-parts forecasting.

6.
In the design, construction, and post-construction settlement control of metro projects, monitored crown settlement is an important indicator of the safety and stability of underground structures. Commonly used crown settlement models only give short-term predictions with limited accuracy and require soil constitutive parameters. By coupling phase space reconstruction with least squares support vector machine theory, a chaotic time series prediction model for metro tunnel crown settlement is built based on an improved C-C method for phase space reconstruction and the LS-SVM. A worked example shows that the model fits better and predicts more accurately than the traditional C-C phase space reconstruction, the chaos prediction model based on the largest Lyapunov exponent, and an artificial neural network model.

7.
A least squares support vector machine classification method in reliability analysis (total citations: 1; self-citations: 0; citations by others: 1)
To improve the computational efficiency of support vector classification for reliability problems with large samples, the least squares support vector classifier is introduced into reliability analysis, converting the quadratic programming problem of the support vector machine into the solution of a system of linear equations and thus reducing the computational effort. Numerical examples show that the reliability method based on the least squares support vector classifier has the same accuracy as the method based on the standard support vector classifier, while being clearly more efficient.
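The computational point of this abstract, replacing the SVM's quadratic program with one linear solve, can be sketched as follows: a minimal RBF-kernel LS-SVM classifier in NumPy; the reliability limit-state sampling itself is omitted and the toy data are illustrative.

# Sketch: LS-SVM classification reduces training to one linear solve
#   [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],  Omega_ij = y_i y_j K(x_i, x_j)
import numpy as np

def lssvm_classifier_train(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)                 # one linear solve instead of a QP
    b, alpha = sol[0], sol[1:]
    def predict(X_new):
        d2n = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Kn = np.exp(-d2n / (2 * sigma ** 2))
        return np.sign(Kn @ (alpha * y) + b)      # sign(sum_i alpha_i y_i K(x, x_i) + b)
    return predict

# Usage with toy data; labels must be +1 / -1.
X = np.vstack([np.random.randn(20, 2) + 2, np.random.randn(20, 2) - 2])
y = np.concatenate([np.ones(20), -np.ones(20)])
predict = lssvm_classifier_train(X, y)
print((predict(X) == y).mean())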

8.
To improve the accuracy and overall performance of financial distress prediction, neighborhood rough sets and a genetic algorithm are applied to the dually constrained least squares support vector machine, and a dually constrained LS-SVM prediction model based on neighborhood-rough-set attribute reduction is proposed. The implementation steps of the improved model are given. Empirical results show that, after preprocessing the indicators with neighborhood rough sets and optimizing the parameters with the genetic algorithm, the model's prediction accuracy improves and its running time decreases, confirming that the model is effective for financial distress prediction.

9.
This paper proposes a new regression model, correlation-removing least squares, which effectively overcomes correlation among variables while also performing variable selection. It is compared with ordinary least squares, backward variable elimination, and partial least squares. The comparison shows that correlation-removing least squares handles multicollinearity among the independent variables well, selects variables effectively, and avoids anomalous regression coefficients.

10.
To estimate the investment of highway construction projects quickly and accurately, a new investment estimation model is proposed. Based on independent component analysis (ICA) and the minimum mutual information principle, the model first separates the independent source factors that influence the investment estimate. These independent factors are then used to train a least squares support vector machine, yielding an ICA-LS-SVM investment estimation model for highway construction projects. The model combines the blind source separation capability of ICA with the strength of the LS-SVM in nonlinear regression with limited samples, improving prediction accuracy.
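A minimal sketch of the two-stage pipeline, assuming scikit-learn's FastICA for the blind source separation and KernelRidge as a stand-in for the LS-SVM regressor; the cost drivers and the synthetic data are illustrative.

# Sketch: ICA separates independent source factors from the raw cost drivers,
# and the separated sources train a kernel regressor.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(60, 6))        # e.g. route length, terrain, bridge ratio, ...
cost = X_raw @ rng.normal(size=6) + 0.1 * rng.normal(size=60)

model = make_pipeline(
    FastICA(n_components=4, random_state=0),      # independent source factors
    KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5),
)
model.fit(X_raw, cost)
estimate = model.predict(X_raw[:5])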

11.
Kernel logistic regression (KLR) is a very powerful algorithm that has been shown to be competitive with many state-of-the-art machine learning algorithms such as support vector machines (SVM). Unlike SVM, KLR can be easily extended to multi-class problems and produces class posterior probability estimates, making it very useful for many real-world applications. However, the training of KLR using gradient-based methods or iteratively re-weighted least squares can be unbearably slow for large datasets. Coupled with poor conditioning of the design matrix and parameter tuning, training KLR can quickly become infeasible for some real datasets. The goal of this paper is to present simple, fast, scalable, and efficient algorithms for learning KLR. First, based on a simple approximation of the logistic function, a least squares algorithm for KLR is derived that avoids the iterative tuning of gradient-based methods. Second, inspired by the extreme learning machine (ELM) theory, an explicit feature space is constructed through a generalized single-hidden-layer feedforward network and used for training iteratively re-weighted least squares KLR (IRLS-KLR) and the newly proposed least squares KLR (LS-KLR). Finally, for large-scale and/or poorly conditioned problems, a robust and efficient preconditioned learning technique is proposed for learning the algorithms presented in the paper. Numerical results on a series of artificial and 12 real benchmark datasets show, first, that LS-KLR compares favorably with SVM and traditional IRLS-KLR in terms of accuracy and learning speed. Second, the extension of ELM to KLR results in simple, scalable, and very fast algorithms with generalization performance comparable to their original versions. Finally, the introduced preconditioned learning method can significantly increase the learning speed of IRLS-KLR.
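A minimal NumPy sketch of the IRLS-KLR baseline the paper starts from: regularized kernel logistic regression trained by Newton/IRLS updates. The ELM feature map, the least squares approximation of the logistic function, and the preconditioning are not reproduced here; kernel and regularization settings are illustrative.

# Sketch: kernel logistic regression trained by IRLS / Newton steps.
#   f = K @ a,  p = sigmoid(f),  objective = -loglik + (lam/2) a^T K a
import numpy as np

def irls_klr(X, y, lam=1e-2, sigma=1.0, n_iter=25):
    """y in {0,1}; returns dual coefficients a and a probability predictor."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    n = len(y)
    a = np.zeros(n)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(K @ a)))
        W = p * (1.0 - p)                          # IRLS weights
        # Newton step: (diag(W) K + lam I) delta = (y - p) - lam a
        H = W[:, None] * K + lam * np.eye(n)
        a = a + np.linalg.solve(H, (y - p) - lam * a)
    def predict_proba(X_new):
        d2n = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Kn = np.exp(-d2n / (2 * sigma ** 2))
        return 1.0 / (1.0 + np.exp(-(Kn @ a)))
    return a, predict_proba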

12.
Least squares support vector machine (LS-SVM) for nonlinear regression is sensitive to outliers. Weighted LS-SVM (WLS-SVM) overcomes this drawback by assigning a weight to each training sample. However, as the number of outliers increases, the accuracy of WLS-SVM may decrease. In order to improve the robustness of WLS-SVM, a new robust regression method based on WLS-SVM and penalized trimmed squares (WLSSVM–PTS) is proposed. The algorithm comprises three main stages. First, the initial parameters are obtained by least trimmed squares. Then, the significant outliers are identified and eliminated by the Fast-PTS algorithm. Finally, the remaining samples, which contain few outliers, are fitted by WLS-SVM. Statistical tests of experimental results on numerical and real-world datasets show that the proposed WLSSVM–PTS is significantly more robust than LS-SVM, WLS-SVM, and LSSVM–LTS.
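The weighting stage that WLS-SVM contributes can be sketched as follows: a plain LS-SVM fit yields residuals, the residuals are converted into per-sample weights by a Suykens-style rule (thresholds c1 = 2.5, c2 = 3.0), and a second solve down-weights suspected outliers. The LTS initialization and Fast-PTS trimming proposed in the paper are omitted; all constants are illustrative.

# Sketch: weighted LS-SVM refit based on the residuals of an unweighted fit.
import numpy as np

def rbf(A, B, sigma):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))

def lssvm_solve(K, y, gamma, v=None):
    n = len(y)
    D = np.eye(n) if v is None else np.diag(1.0 / v)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + D / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                         # b, alpha

def wlssvm_fit(X, y, gamma=10.0, sigma=1.0, c1=2.5, c2=3.0):
    K = rbf(X, X, sigma)
    b, alpha = lssvm_solve(K, y, gamma)            # stage 1: plain LS-SVM
    e = alpha / gamma                              # LS-SVM residuals e_k = alpha_k / gamma
    s = 1.483 * np.median(np.abs(e - np.median(e))) + 1e-12   # robust scale estimate
    r = np.abs(e / s)
    v = np.where(r <= c1, 1.0, np.where(r <= c2, (c2 - r) / (c2 - c1), 1e-4))
    b, alpha = lssvm_solve(K, y, gamma, v)         # stage 2: weighted refit
    return b, alpha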

13.
Application of support vector machine regression to surface water quality assessment (total citations: 2; self-citations: 0; citations by others: 2)
Support vector machine regression is applied to surface water quality assessment, and a support vector regression model for comprehensive multi-indicator evaluation is established. Training samples are generated by interpolation from the surface water quality standards; after training, classification intervals for the water quality grades are obtained, and the model is then checked against measured data. The results show that the support vector regression model performs well, predicts accurately, and is simple to apply, making it an effective method for water quality assessment with broad application prospects.

14.
Previous studies on financial distress prediction (FDP) mostly construct FDP models based on a balanced data set, or only use traditional classification methods for FDP modelling based on an imbalanced data set, which often results in an overestimation of an FDP model's recognition ability for distressed companies. Our study focuses on support vector machine (SVM) methods for FDP based on imbalanced data sets. We propose a new imbalance-oriented SVM method that combines the synthetic minority over-sampling technique (SMOTE) with the Bagging ensemble learning algorithm and uses SVM as the base classifier. It is named the SMOTE-Bagging-based SVM-ensemble (SB-SVM-ensemble) and is theoretically more effective for FDP modelling based on imbalanced data sets with a limited number of samples. For comparison, the traditional SVM method as well as three classical imbalance-oriented SVM methods, namely cost-sensitive SVM, SMOTE-SVM, and data-set-partition-based SVM-ensemble, are also introduced. We collect an imbalanced data set for FDP from Chinese publicly traded companies and carry out 100 experiments to empirically test its effectiveness. The experimental results indicate that the new SB-SVM-ensemble method outperforms the traditional methods and is a useful tool for imbalanced FDP modelling.
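One way to sketch the SMOTE + Bagging + SVM ensemble with standard libraries (imbalanced-learn's SMOTE, scikit-learn's SVC): each bag is a bootstrap sample of the imbalanced data, re-balanced by SMOTE, on which an SVC is fitted; prediction is by majority vote. The hyperparameters, the voting rule, and the 0/1 label assumption are illustrative, not the paper's exact configuration.

# Sketch: SMOTE-Bagging SVM ensemble for imbalanced classification.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC
from sklearn.utils import resample

def fit_sb_svm_ensemble(X, y, n_bags=10, random_state=0):
    models = []
    for b in range(n_bags):
        Xb, yb = resample(X, y, random_state=random_state + b)   # bootstrap bag
        Xs, ys = SMOTE(random_state=random_state + b).fit_resample(Xb, yb)
        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xs, ys)
        models.append(clf)
    return models

def predict_majority(models, X_new):
    votes = np.stack([m.predict(X_new) for m in models])
    # majority vote over the bagged SVMs (labels assumed to be 0/1)
    return (votes.mean(axis=0) >= 0.5).astype(int)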

15.
The support vector machine (SVM) is known for its good performance in two-class classification, but its extension to multiclass classification is still an ongoing research issue. In this article, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in two-class classification, but also can naturally be generalized to the multiclass case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the support points of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a potential computational advantage over the SVM.

16.
In this paper, we propose a robust support vector regression with a novel generic nonconvex quadratic ε-insensitive loss function. The proposed method is robust to outliers or noise since it can adaptively control the loss value and decrease the negative influence of outliers or noise on the decision function by adjusting the elastic interval parameter and the adaptive robustification parameter. Given the nonconvexity of the optimization problem, a concave-convex programming procedure is employed to solve it. Experimental results on two artificial data sets and three real-world data sets indicate that the proposed method outperforms support vector regression, L1-norm support vector regression, least squares support vector regression, robust least squares support vector regression, and support vector regression with the Huber loss function in both robustness and generalization ability.

17.
A least squares support vector machine (LSSVM) is used to model the residual strength of soils, with liquid limit, plasticity index, clay content, and deviation among the input variables. Two LSSVM models are built by varying the structure of the inputs, with the model parameters set by particle swarm optimization (PSO), and each model predicts the residual friction angle. Comparison with experimental values and an artificial neural network (ANN) model shows that the LSSVM models perform better. In addition, a sensitivity analysis of the LSSVM inputs shows that deviation has the greatest influence on the model, confirming conclusions in the literature and supporting the reasonableness of the model.
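A minimal sketch of the PSO parameter search used in this kind of setup, assuming the fitness function wraps an LSSVM fit (for example the solver sketched under item 1) and returns a validation error; the bounds and PSO constants are illustrative.

# Sketch: particle swarm optimization choosing LSSVM hyperparameters (gamma, sigma).
import numpy as np

def pso_tune(fitness, bounds, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                   # e.g. bounds = [(1e-2, 1e3), (1e-2, 1e2)]
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest                                  # best (gamma, sigma)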

18.
In this paper, we propose deep partial least squares for the estimation of high-dimensional nonlinear instrumental variable regression. As a precursor to a flexible deep neural network architecture, our methodology uses partial least squares for dimension reduction and feature selection from the set of instruments and covariates. A central theoretical result, due to Brillinger (2012), Selected Works of David Brillinger, 589–606, shows that the feature selection provided by partial least squares is consistent and the weights are estimated up to a proportionality constant. We illustrate our methodology with synthetic datasets with a sparse and correlated network structure and draw applications to the effect of childbearing on the mother's labor supply based on the classic data of Chernozhukov et al., Ann Rev Econ (2015b): 649–688. The results on synthetic data as well as the applications show that the deep partial least squares method significantly outperforms other related methods. Finally, we conclude with directions for future research.
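A minimal sketch of the "PLS features feeding a deep network" idea with scikit-learn on synthetic data; the instrumental variable machinery and the specific network architecture of the paper are not reproduced, and all sizes are illustrative.

# Sketch: partial least squares for dimension reduction, then a small neural
# network fitted on the PLS scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 50))                    # high-dimensional, correlated inputs
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] + 0.1 * rng.normal(size=500)

pls = PLSRegression(n_components=5).fit(Z, y)     # dimension reduction / feature selection
scores = pls.transform(Z)                         # PLS scores as learned features

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(scores, y)
prediction = net.predict(pls.transform(Z[:10]))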
