Similar Literature
20 similar documents found (search time: 134 ms)
1.
A practical composite method for short-term power-system load forecasting is proposed, based on ensemble empirical mode decomposition (EEMD), least squares support vector machines (LSSVM), and a BP neural network. First, EEMD decomposes the non-stationary load series; then, the best kernel function is chosen according to the characteristics of each component and LSSVM forecasts each component separately; finally, a BP neural network recombines the component forecasts into the final result. Analysis of measured data shows that short-term load forecasting based on this composite method achieves high accuracy.
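The LSSVM models recurring throughout these abstracts all reduce training to a single linear system rather than a quadratic program. As a minimal sketch (not any author's implementation; the toy data, plain RBF kernel, and hand-rolled Gaussian elimination are assumptions), an LSSVM regressor can be written as:

```python
import math

def rbf(x, z, sigma=1.0):
    """Gaussian (RBF) kernel between two scalars."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=1000.0, sigma=1.0):
    """Solve the LSSVM dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j], sigma) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    return sol[0], sol[1:]          # bias b, multipliers alpha

def lssvm_predict(x, xs, b, alpha, sigma=1.0):
    return b + sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, xs))
```

The equality constraints of LSSVM turn training into one (n+1)-dimensional linear solve, at the cost of losing the sparsity in the multipliers that a standard SVM enjoys.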

2.
To forecast time series with nonlinear characteristics, a chaos-based least squares support vector machine is proposed. The algorithm reconstructs the time series in phase space, using the resulting embedding dimension and time delay to select the data samples, and combines the least-squares principle with support vector machines to build a chaos-LSSVM forecasting model. The model was used to forecast the soil moisture time series at Luancheng station. The results show that phase-space reconstruction improves sample selection and, judged by the model's evaluation metrics, the chaos-LSSVM model accurately forecasts nonlinear time series, giving it clear theoretical and practical value.
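The phase-space reconstruction underlying this model can be illustrated independently of the SVM part. The sketch below is an assumption-level toy, not the paper's method: it builds delay vectors for a chosen embedding dimension `m` and delay `tau`, then forecasts one step ahead with the nearest historical analogue (the Lorenz analogue method):

```python
def embed(series, m, tau):
    """Delay-coordinate embedding: vectors (x_t, x_{t-tau}, ..., x_{t-(m-1)tau})."""
    start = (m - 1) * tau
    return [tuple(series[t - k * tau] for k in range(m))
            for t in range(start, len(series))]

def predict_next(series, m=3, tau=1):
    """One-step forecast: find the past state nearest to the current one
    and return the value that followed it."""
    vecs = embed(series, m, tau)
    current = vecs[-1]
    best, best_d = None, float("inf")
    for i, v in enumerate(vecs[:-1]):
        d = sum((a - b) ** 2 for a, b in zip(v, current))
        if d < best_d:
            best_d, best = d, i
    # index in the original series of the successor of the matched state
    return series[(m - 1) * tau + best + 1]
```

In the paper's setting the embedded vectors would instead become LSSVM training samples, with `m` and `tau` estimated from the data (e.g. by the C-C method mentioned in abstract 5).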

3.
Since major events such as financial crises may introduce outliers into a time series, a LOF-SSA-LSSVM forecasting model based on the local outlier factor (LOF) is proposed and applied to forecasting container throughput at Guangzhou Port. First, the original series is decomposed with X12 additive seasonal decomposition. The LOF algorithm then detects outliers in the resulting irregular component, locating the anomalous points, which are corrected in the seasonally adjusted series via interpolation or least squares support vector machine (LSSVM) predictions. Adding the corrected seasonally adjusted series back to the seasonal-factor series yields the new series to be forecast. In the forecasting stage, singular spectrum analysis (SSA) first decomposes and reconstructs the new series to remove noise, and LSSVM then produces the forecast. Empirical results show that the LOF-SSA-LSSVM model outperforms BP, ARIMA, and other models in forecast accuracy.
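The LOF step scores each point by how much sparser its neighbourhood is than its neighbours' neighbourhoods; scores well above 1 mark outliers. A brute-force sketch for small 1-D data such as an irregular component (a simplified illustration under toy data, not the paper's implementation):

```python
def lof_scores(points, k=2):
    """Local outlier factor for 1-D points (brute force, small data)."""
    n = len(points)
    dist = [[abs(points[i] - points[j]) for j in range(n)] for i in range(n)]
    neigh, kdist = [], []
    for i in range(n):
        order = sorted(range(n), key=lambda j: dist[i][j])
        order.remove(i)                     # a point is not its own neighbour
        kdist.append(dist[i][order[k - 1]]) # k-distance
        neigh.append(order[:k])             # k nearest neighbours
    def lrd(i):
        """Local reachability density: inverse mean reachability distance."""
        reach = sum(max(kdist[j], dist[i][j]) for j in neigh[i])
        return k / reach if reach > 0 else float("inf")
    lrds = [lrd(i) for i in range(n)]
    # LOF = (mean lrd of neighbours) / (own lrd); ~1 for inliers, >>1 for outliers
    return [sum(lrds[j] for j in neigh[i]) / (k * lrds[i]) for i in range(n)]
```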

4.
To avoid blind parameter settings in least squares support vector machines, the fruit fly optimization algorithm is used to select the parameters, yielding a hybrid fruit-fly-optimized LSSVM forecasting model. Forecasting China's logistics demand serves as a case study to verify the model's feasibility and effectiveness. The results show that, compared with a plain LSSVM and a simulated-annealing-optimized LSSVM, the model not only selects parameter values effectively but also achieves higher forecast accuracy.
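Fruit fly optimization is a population search in which the swarm repeatedly scatters around a centre and the centre flies to the best "smell" found. The canonical algorithm encodes smell concentration as the reciprocal of the distance to the origin; the simplified 1-D variant below (an illustrative sketch, not the paper's tuner) just minimizes the objective directly, as one would when tuning a single LSSVM hyperparameter against validation error:

```python
import random

def foa_minimize(f, lo, hi, flies=20, iters=60, seed=0):
    """Simplified fruit fly optimization: each generation the swarm samples
    random steps around its centre, and the centre moves to the best point."""
    rng = random.Random(seed)
    centre = rng.uniform(lo, hi)
    best_x, best_val = centre, f(centre)
    for _ in range(iters):
        for _ in range(flies):
            x = min(hi, max(lo, centre + rng.uniform(-1, 1)))
            v = f(x)
            if v < best_val:
                best_val, best_x = v, x
        centre = best_x          # swarm flies to the best position found
    return best_x, best_val
```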

5.
In the design, construction, and post-construction settlement control of metro projects, monitored crown settlement is key data reflecting the safety and stability of underground structures. Common metro crown settlement models can only make short-term predictions, have limited accuracy, and require soil constitutive parameters. Coupling phase-space reconstruction with least squares support vector machine theory, a chaotic time-series prediction model for metro tunnel crown settlement is built from an improved C-C phase-space reconstruction and LSSVM. Worked examples show that the model fits better and predicts more accurately than traditional C-C phase-space reconstruction, the maximum-Lyapunov-exponent chaotic prediction model, and artificial neural network models.

6.
Weighing the strengths and weaknesses of partial least squares and support vector machines, a natural gas consumption forecasting model based on partial least squares support vector machines is proposed. First, partial least squares extracts new composite variables that influence natural gas consumption; a support vector machine model with these composite variables as inputs and consumption as output then forecasts natural gas consumption. Error comparisons against multiple regression, partial least squares regression, and an ordinary support vector machine verify the method's feasibility and correctness. The results show that the forecasting model has high accuracy and practical value.

7.
Credit classification is an important step in credit risk management; its main purpose is to separate creditworthy applicants from defaulters based on the information they provide, supporting credit decisions. To distinguish different credit customers correctly, especially defaulters, kernel principal component analysis (KPCA) and support vector machines are combined into a least squares fuzzy support vector machine with a variable penalty factor based on KPCA. In this model, the sample data are first preprocessed; KPCA then reduces the dimensionality nonlinearly; finally, the variable-penalty least squares fuzzy SVM classifies the reduced data. Two public credit data sets are used for empirical validation. The results show that the model achieves good classification performance and can provide an important reference for credit decision makers.

8.
To improve the accuracy of financial distress prediction while reducing the number of training samples and the training time, genetic algorithms, information entropy, and a reduced-memory algorithm are applied to the least squares support vector machine (LS-SVM), extending the traditional SVM prediction model into a reduced-memory LS-SVM based on genetic algorithms and information entropy. Expressions are derived independently for the entropy of the discrete financial-distress series and for the SVM kernel function, and the implementation steps of the improved model are given. Experimental results show that the model significantly outperforms both the LS-SVM and traditional SVM models in prediction accuracy, number of training samples, and training time.

9.
A grey combination model based on LS-SVM for predicting pipeline corrosion rate (cited by 1: 0 self-citations, 1 by others)
To improve the accuracy of pipeline corrosion rate prediction, a grey combination forecasting model based on least squares support vector machines is built. The corrosion-rate predictions of various grey models serve as the SVM inputs and the measured corrosion rates as outputs; the SVM is trained with the least squares regression algorithm and a Gaussian kernel, and the trained machine produces the combined forecast. The model combines the advantages of grey models (little raw data needed, simple modeling, convenient computation) with those of LSSVM (strong generalization, good nonlinear fitting, small-sample capability), remedying the shortcomings of single models and avoiding the tendency of neural-network combination forecasts to fall into local optima. The model is simple and practical, and simulation results verify its effectiveness.
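The grey models feeding this combination are typically GM(1,1): accumulate the series, fit the whitening equation dx1/dt + a·x1 = b by least squares, and predict by differencing the closed-form exponential. A compact sketch (toy data; the specific grey variants used in the paper are not specified here):

```python
import math

def gm11(x0):
    """Fit a GM(1,1) grey model to a positive series and return (a, b, predict)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series (AGO)
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # background values
    # least squares for x0[k] = -a*z[k-1] + b, k = 1..n-1
    m = n - 1
    sz = sum(z); szz = sum(v * v for v in z)
    sy = sum(x0[1:]); szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    a = -(m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m
    def predict(k):
        """Predicted x0 at index k, by differencing the fitted x1(k)."""
        if k == 0:
            return x0[0]
        x1k = (x0[0] - b / a) * math.exp(-a * k) + b / a
        x1p = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        return x1k - x1p
    return a, b, predict
```

In the combination model, several such grey predictions for the same instant would form the LSSVM input vector, with the measured corrosion rate as the target.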

10.
For semi-supervised classification, a semi-supervised least squares support vector machine based on convex absolute-value inequalities is proposed. Traditional semi-supervised SVMs are not very robust or efficient; to address this, convex absolute-value inequalities split the plane into two overlapping half-planes, and classification accuracy is improved by minimizing the overlap and handling the unlabeled points with LSSVM ideas, yielding results with a degree of robustness. Numerical experiments on eight data sets demonstrate the effectiveness of the proposed semi-supervised classification algorithm.

11.
In financial time series, a series can be viewed as piecewise functions fitted over different time segments and joined together. The 3σ rule is used to locate the breakpoints of the piecewise function, which are validated with the AIC criterion and adjusted R², splitting the data into two parts. Each segment is fitted with a suitable function, and an ARMA-GARCH model corrects the residual series. Empirical analysis of Shanghai Composite Index data shows that the 3σ rule locates the breakpoints well, that the resulting piecewise model forecasts better than ARMA and EGARCH models, and that introducing ARMA-GARCH further improves the model's accuracy. The method is simple, easy to apply, and accurate, offering a useful reference for financial investors and researchers.
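The 3σ screening step can be sketched on first differences: a candidate breakpoint is any point whose jump lies outside mean ± 3·std of all jumps. This is a simplified illustration of the rule on synthetic data, not the paper's full procedure (which also validates candidates by AIC and adjusted R²):

```python
def sigma3_breakpoints(series):
    """Indices whose first difference violates the 3-sigma rule."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    n = len(diffs)
    mu = sum(diffs) / n
    sd = (sum((d - mu) ** 2 for d in diffs) / n) ** 0.5
    # index i+1 is the first point after the suspicious jump
    return [i + 1 for i, d in enumerate(diffs) if abs(d - mu) > 3 * sd]
```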

12.
A monitoring-data analysis method based on least trimmed squares estimation (cited by 1: 0 self-citations, 1 by others)
Outliers are unavoidable in safety-monitoring data for water projects, and the widely used least squares (LS) method cannot reject them; on the contrary, it tends to absorb them, pulling the regression curve far from reality. To remedy this defect of LS, this paper builds a water-project safety monitoring model using least trimmed squares (LTS) estimation, based on minimizing a trimmed sum of squared residuals. Monitoring data from an actual project are processed, outliers are rejected to obtain an optimal data subset, and the regression coefficients of that subset yield a fitted curve closest to the real data. Compared with LS estimation, LTS results are more reasonable and more robust, and they significantly improve prediction accuracy. LTS estimation therefore has good application prospects in safety-monitoring data analysis for water projects and similar settings.
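The LTS idea can be sketched for a straight-line fit with concentration steps: fit OLS, keep the h points with smallest residuals, refit, repeat. Production LTS solvers (e.g. FAST-LTS) add random restarts; the single-start toy below is only an illustration of why trimming rejects the outliers that plain LS absorbs:

```python
def lts_line(xs, ys, h=None, iters=20):
    """Least trimmed squares fit of y = m*x + c via concentration (C-)steps."""
    n = len(xs)
    h = h or (n * 3) // 4          # coverage: fit on the best 75% of points
    def ols(ids):
        k = len(ids)
        sx = sum(xs[i] for i in ids); sy = sum(ys[i] for i in ids)
        sxx = sum(xs[i] ** 2 for i in ids); sxy = sum(xs[i] * ys[i] for i in ids)
        m = (k * sxy - sx * sy) / (k * sxx - sx * sx)
        return m, (sy - m * sx) / k
    m, c = ols(list(range(n)))     # start from the (contaminated) LS fit
    for _ in range(iters):
        order = sorted(range(n), key=lambda i: (ys[i] - m * xs[i] - c) ** 2)
        m, c = ols(order[:h])      # refit on the h best-fitting points
    return m, c
```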

13.
Based on Shanghai's per-capita GDP and industrial water reuse from 2000 to 2009, together with industrial wastewater discharge data, a multivariate nonlinear regression forecasting model based on time series is constructed, evaluated, and analyzed. The model fits with high accuracy and objectively reflects industrial wastewater discharge, providing a reference for decision makers and helping the relevant departments improve their management.

14.
Improving the efficiency of monitoring and controlling software project effort is a challenge for project management research. We propose to meet this challenge with a model for determining and monitoring a software project effort buffer. The effort buffer is first determined from a risk-management factor analysis that fully accounts for project managers' risk preferences. The buffer is then allocated across stages according to a buffer allocation base. An effort-deviation monitoring and control model is next established on the basis of the grey prediction model, covering the deviation monitoring and control model itself, a simulation test of its accuracy, and the deviation-prediction algorithm flow chart. The method system was finally applied to an actual project and compared with the actual project data. The results show that the relative-error accuracy of the proposed model is qualified under the test standard of the grey model, meaning it can be used for effort-deviation prediction and decision-making. The proposed model can monitor and control software project effort effectively as a dynamic control system.

15.
Recently, spline approximations have been proposed for reconstructing piecewise smooth functions from Fourier data. That approach makes it possible to retrieve such functions from their Fourier coefficients to any given accuracy when the discontinuity points are known. In this paper we present iterative methods, based on those spline approximations of several degrees, for finding the locations and amplitudes of the jumps of a piecewise smooth function from its Fourier coefficients. We also present numerical experiments comparing the methods with several previous approaches.

16.
Zero-slope regression is an important problem in chemometrics, ranging from intercept-bias and slope 'corrections' in spectrometry to the analysis of administrative data on chemical water pollution in the region of Arica and Parinacota. The issue is complex, integrating problems of optimal design, symmetry of errors, stabilization of estimator variability, and dynamical systems for errors, together with administrative-data challenges. In this article we introduce a realistic approach to the zero-slope regression problem from a dynamical point of view. Linear regression is a widely used approach to data fitting under the assumption of normally distributed residuals; many times, however, non-normal residuals are observed and can also be theoretically justified. Our solution uses the recently introduced inference function called the score function of distribution, with the minimum information of residuals criterion as the minimization criterion. Score regression is a direct generalization of least-squares regression to an arbitrary known (believed) distribution of residuals, and score estimation is a distribution-sensitive version of M-estimation. The capability of the method is demonstrated on water pollution data examples.

17.
Electrical capacitance tomography (ECT) is a potential measurement technology for industrial process monitoring, but its applicability is generally limited by low-quality tomographic images. Boosting the performance of inverse computing imaging algorithms is the key to improving the reconstruction quality (RQ). Common regularization iteration imaging methods with analytical prior regularizers are less flexible in dealing with actual reconstruction tasks, leading to large reconstruction errors. To address this challenge, this study proposes a new imaging method, including a reconstruction model and an optimizer. A data-driven regularizer from a new ensemble learning model and an analytical prior regularizer focused on the sparsity of imaging objects are combined into a new optimization model for imaging. In the proposed ensemble learning model, the generalized low rank approximations of matrices (GLRAM) method carries out dimensionality reduction to decrease the redundancy of the input data and improve diversity, the extreme learning machine (ELM) serves as the base learner, and the nuclear norm based matrix regression (NNMR) method is developed to aggregate the ensemble of solutions. The singular value thresholding method (SVTM) and the fast iterative shrinkage-thresholding algorithm (FISTA) are inserted into the split Bregman method (SBM) to generate a powerful optimizer for the built computational model. Comparison with competing methods in numerical experiments on typical imaging targets demonstrates that the developed algorithm reduces reconstruction error and markedly improves imaging quality and robustness.

18.
We introduce a simple and efficient method to reconstruct an element of a Hilbert space in terms of an arbitrary finite collection of linearly independent reconstruction vectors, given a finite number of its samples with respect to any Riesz basis. As we establish, provided the dimension of the reconstruction space is chosen suitably in relation to the number of samples, this procedure can be implemented in a completely numerically stable manner. Moreover, the accuracy of the resulting approximation is determined solely by the choice of reconstruction basis, meaning that reconstruction vectors can be readily tailored to the particular problem at hand. An important example of this approach is the accurate recovery of a piecewise analytic function from its first few Fourier coefficients. Whilst the standard Fourier projection suffers from the Gibbs phenomenon, by reconstructing in a piecewise polynomial basis we obtain an approximation with root-exponential accuracy in terms of the number of Fourier samples and exponential accuracy in terms of the degree of the reconstruction. Numerical examples illustrate the advantage of this approach over other existing methods.
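The Gibbs phenomenon this abstract refers to is easy to exhibit numerically: the Fourier partial sum of a square wave overshoots the jump by a fixed fraction (about 8.95% of the jump, i.e. a peak near 1.179 for a unit square wave) no matter how many terms are kept. A small demonstration, assuming the standard sine series of sgn(sin x):

```python
import math

def square_partial_sum(x, N):
    """Partial Fourier sum of the square wave sgn(sin x):
    S_N(x) = (4/pi) * sum over odd k <= N of sin(k x)/k."""
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, N + 1, 2))

def max_overshoot(N, samples=2000):
    """Maximum of S_N on (0, pi); it does not decay to 1 as N grows,
    but approaches (2/pi)*Si(pi) ~ 1.179 (the Gibbs overshoot)."""
    return max(square_partial_sum(math.pi * i / samples, N)
               for i in range(1, samples))
```

Reconstruction methods such as the one in this abstract sidestep this: changing the reconstruction basis removes the overshoot that no amount of extra Fourier terms can.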

19.
The rational use of water resources is vital to regional economic and social development and to the sustainable development of humans and nature. A fuzzy comprehensive evaluation model is constructed using seven indicators (annual precipitation, per-capita water resources, water-resource utilization rate, water use per 10,000 yuan of GDP, water use per 10,000 yuan of industrial added value, irrigation water use per mu of farmland, and ecological water use) to dynamically evaluate the water-resources carrying capacity of Ganzhou from 2009 to 2018, analyzing its evolution over the decade and the main factors influencing it. The results show: 1) Ganzhou's overall water-resources carrying capacity is relatively high, leaving room for further development and utilization; 2) from 2009 to 2018 the carrying capacity rose overall, though only slightly and with small fluctuations, with GDP, industrial water use, and agricultural water use having significant effects; 3) Ganzhou's water resources are relatively abundant, but uneven spatial and temporal distribution, an imperfect allocation system, and weak water-supply infrastructure keep the degree of development and utilization low. The results can provide a decision-making basis for the sustainable use of local water resources.
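The core of a fuzzy comprehensive evaluation is the composition B = W ∘ R: each indicator contributes its membership degrees to the evaluation grades, weighted by the indicator's importance. A minimal sketch with the common weighted-sum M(·,+) operator (toy memberships and weights are assumptions, not the paper's data):

```python
def fuzzy_evaluate(memberships, weights):
    """Fuzzy comprehensive evaluation, weighted-sum operator:
    memberships[i][g] = degree to which indicator i belongs to grade g,
    weights[i] = importance of indicator i (should sum to 1).
    Returns the membership of the object in each grade."""
    grades = len(memberships[0])
    return [sum(w * row[g] for w, row in zip(weights, memberships))
            for g in range(grades)]
```

The evaluated grade is then typically taken as the one with maximum membership.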

20.
We calibrate and contrast the recent generalized multinomial logit model and the widely used latent class logit model approaches for studying heterogeneity in consumer purchases. We estimate the parameters of the models on panel data of household ketchup purchases, and find that the generalized multinomial logit model outperforms the best-fitting latent class logit model in terms of the Bayesian information criterion. We compare the posterior estimates of coefficients for individual customers under the two models and discuss how the differences could affect marketing strategies, such as pricing, when each model is applied. We also describe extensions to the scale heterogeneity model that include the effects of state dependence and purchase history. Copyright © 2011 John Wiley & Sons, Ltd.
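The scale heterogeneity that distinguishes the generalized multinomial logit model can be illustrated in miniature: in G-MNL an individual-level scale term multiplies the utilities before the logit transform, sharpening or flattening choice probabilities. A minimal sketch (not the paper's estimation code; utilities and scale values are toy assumptions):

```python
import math

def mnl_probs(utilities, scale=1.0):
    """Multinomial logit choice probabilities; `scale` multiplies all
    utilities, playing the role of the individual scale term in G-MNL."""
    exps = [math.exp(scale * u) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]
```

A customer with a large scale chooses almost deterministically by utility, while a small scale makes choices look nearly random; the latent class model instead captures heterogeneity through class-specific coefficient vectors.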


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号