20 similar documents found (search time: 31 ms)
1.
In traditional one-shot modeling algorithms for support vector regression, the model must be relearned from scratch whenever new samples arrive, whereas an incremental algorithm can make full use of the results of the previous learning stage. Incremental SVR algorithms are usually based on the ε-insensitive loss function, which is sensitive to large outliers; the Huber loss function, by contrast, has low sensitivity to outliers. In noisy real-world settings, the Huber loss is therefore a better choice than the ε-insensitive loss. Based on this, the paper proposes an incremental Huber-SVR algorithm built on the Huber loss function, which continuously integrates new sample information into the already-constructed model instead of rebuilding it. Compared with the incremental ε-SVR and incremental RBF algorithms, incremental Huber-SVR achieves higher prediction accuracy when building predictive models on real data.
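To make the contrast concrete, here is a minimal sketch of the two loss functions discussed above (the parameter values δ and ε are illustrative, not taken from the paper):

```python
def huber_loss(r, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    a = abs(r)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - 0.5 * delta)

def eps_insensitive_loss(r, eps=0.1):
    """ε-insensitive loss used by standard SVR: zero inside the ε-tube."""
    return max(0.0, abs(r) - eps)

# Both losses grow only linearly for a large outlier residual,
# but the Huber loss is smooth (differentiable) everywhere,
# which eases incremental updates of the model.
print(huber_loss(0.5), huber_loss(10.0))                       # 0.125 9.5
print(eps_insensitive_loss(0.05), eps_insensitive_loss(10.0))  # 0.0 9.9
```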
2.
3.
《数学的实践与认识》2015,(17)
Credit card operations are a high-yield, high-risk business for banks, and controlling credit card customer churn is a problem that card-issuing banks urgently need to solve. Now that banks have accumulated large volumes of data and built data warehouses, applying data mining techniques to credit card churn analysis has become feasible. Using the twin support vector machine and credit card data from a commercial bank, a credit card churn analysis model is established; experimental results demonstrate the effectiveness of the method.
4.
In support vector machine predictive modeling, the kernel function maps a nonlinear problem in a low-dimensional feature space to a linear problem in a high-dimensional feature space, and the properties of the kernel strongly influence both learning and prediction. Considering the fitting and generalization characteristics of two typical kernels, the global kernel (polynomial) and the local kernel (RBF), a support vector machine based on a mixed kernel function is adopted for predictive modeling. To evaluate the modeling effect of different kernels and obtain better predictive performance, a genetic algorithm is used to adaptively evolve the parameters of the SVM model, which is then applied to the practical problem of equipment cost prediction. Computations on real data show that the SVM with the mixed kernel predicts better than either single kernel, so the approach can be promoted as an effective predictive modeling method in equipment management.
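A minimal sketch of such a mixed kernel, combining the global polynomial kernel with the local RBF kernel through a mixing weight λ (the degree, γ, and λ values here are illustrative; the paper tunes its parameters with a genetic algorithm):

```python
import math

def poly_kernel(x, y, degree=2, c=1.0):
    """Global kernel: polynomial."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """Local kernel: Gaussian RBF."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def mixed_kernel(x, y, lam=0.7):
    """Convex combination of two valid kernels is again a valid
    (positive definite) kernel, blending global and local behavior."""
    return lam * poly_kernel(x, y) + (1.0 - lam) * rbf_kernel(x, y)

x = [1.0, 0.0]
print(mixed_kernel(x, x))  # 0.7 * (1+1)^2 + 0.3 * exp(0) = 3.1
```

A callable like `mixed_kernel` can be passed to SVM libraries that accept user-defined kernel functions.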
5.
6.
《数学的实践与认识》2019,(20)
Non-intrusive load monitoring and disaggregation techniques have been widely promoted in recent years. Fourteen steady-state indicators are selected as load features, and a non-intrusive load signature recognition model based on the support vector machine (SVM) is established. Load signatures are recognized with the pairwise (one-versus-one) classification algorithm of the multi-class SVM. Tests on randomly sampled data show that the method identifies load signatures accurately, demonstrating the validity and correctness of the proposed model and method.
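A minimal sketch of the pairwise (one-versus-one) voting scheme mentioned above: for k classes, k(k−1)/2 binary classifiers are trained, each votes, and the class with the most votes wins. The toy midpoint-threshold "classifier" below merely stands in for a real binary SVM:

```python
from itertools import combinations

def train_pair(xs, ys, a, b):
    """Toy binary 'SVM' for classes a vs b: threshold at the midpoint
    of the two class means (a real implementation would solve a QP)."""
    ma = sum(x for x, y in zip(xs, ys) if y == a) / ys.count(a)
    mb = sum(x for x, y in zip(xs, ys) if y == b) / ys.count(b)
    thr = (ma + mb) / 2.0
    return lambda x: a if (x < thr) == (ma < mb) else b

def ovo_predict(xs, ys, x):
    """One-vs-one: every class pair votes, the majority wins."""
    classes = sorted(set(ys))
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[train_pair(xs, ys, a, b)(x)] += 1
    return max(classes, key=lambda c: votes[c])

xs = [0.0, 0.2, 1.0, 1.2, 2.0, 2.2]   # 1-D feature values
ys = [0, 0, 1, 1, 2, 2]               # 3 classes -> 3 pairwise voters
print(ovo_predict(xs, ys, 0.1), ovo_predict(xs, ys, 2.1))  # 0 2
```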
7.
Wear Particle Recognition Based on Support Vector Machines  (Cited by: 1, self-citations: 0, other citations: 1)
Owing to the limitations of neural networks, support vector machines were proposed and developed at the end of the last century. They have broad application prospects in pattern recognition and have evolved from the original binary classification to today's multi-class classification. Following the latest developments of support vector machines, this paper applies the least squares support vector machine to wear particle recognition and obtains good results.
8.
To address the problem that the performance function cannot be expressed explicitly in structural reliability analysis, the support vector machine is introduced into structural reliability analysis. The support vector machine is a classification technique that implements the structural risk minimization principle and offers excellent small-sample learning and good generalization, so two SVM-based structural reliability analysis methods are proposed. Compared with the traditional response surface method and the neural network method, the notable feature of the SVM-based reliability method is that it approximates the function with high accuracy from small samples and avoids the curse of dimensionality. Numerical examples also show that the SVM method approximates the true performance function well within the sampling range and reduces the number of implicit performance function evaluations (usually finite element analyses), giving it practical engineering value.
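The role of the surrogate can be sketched as follows: a cheap classifier stands in for the expensive implicit performance function g(x) inside a Monte Carlo loop. Here a hand-written linear surrogate replaces the trained SVM, and the limit state g = 6 − x1 − x2 with the chosen input distribution is purely illustrative, not from the paper:

```python
import random

def g_true(x1, x2):
    """Implicit performance function (each call would be one FE analysis)."""
    return 6.0 - x1 - x2

def surrogate_fails(x1, x2):
    """Stand-in for a trained SVM classifying the sign of g;
    here it is simply the exact linear limit state g < 0."""
    return x1 + x2 > 6.0

random.seed(0)
n, failures = 100_000, 0
for _ in range(n):
    x1 = random.gauss(2.0, 1.0)   # illustrative input distribution
    x2 = random.gauss(2.0, 1.0)
    if surrogate_fails(x1, x2):   # cheap call instead of g_true
        failures += 1
pf = failures / n
print(f"estimated failure probability ~ {pf:.4f}")  # analytic value ~0.0786
```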
9.
For English sentiment classification, different samples are assigned different weights: a fuzzy membership function is introduced, and a fuzzy support vector machine classification algorithm determines the degree to which each sample belongs to a class by computing its fuzzy membership; results obtained with different kernel functions and different penalty coefficients are compared. Simulation results show that the fuzzy support vector machine achieves good classification ability and high recognition accuracy for English sentiment classification.
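A minimal sketch of one common fuzzy membership assignment (linear in the distance to the class center; this particular form and the δ value are illustrative, since the abstract does not fix a specific membership function):

```python
def fuzzy_memberships(xs, delta=1e-3):
    """Membership s_i = 1 - d_i / (d_max + delta), where d_i is the
    distance of sample i from its class center: outliers get low weight."""
    center = sum(xs) / len(xs)
    dists = [abs(x - center) for x in xs]
    d_max = max(dists)
    return [1.0 - d / (d_max + delta) for d in dists]

# One class of 1-D samples with an outlier at 10.0:
xs = [1.0, 1.2, 0.8, 10.0]
s = fuzzy_memberships(xs)
print([round(v, 3) for v in s])   # the outlier receives the smallest weight
# These s_i then scale each sample's penalty term C * s_i
# in the fuzzy SVM objective.
```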
10.
Since Vapnik proposed the concept of the inference-type support vector machine at the end of the 1990s, research on it has essentially stalled, mainly because its optimization model is quite difficult to solve. This paper attempts to convert its optimization problem into an unconstrained problem and then construct a smooth unconstrained optimization problem with a kernel, thereby building an inference-type support vector machine whose optimization problem is easy to solve, in order to break through the bottleneck blocking deeper research on it.
11.
12.
This study mines the correlation between the thematic content of financial news and the stock market, and proposes a prediction model that analyzes the rises and falls of the Chinese stock market by understanding the topic distribution of the day's news. Specifically, using automatic text analysis and machine learning techniques, we first cluster financial news documents with a probabilistic topic model to obtain their topic distributions, then analyze how strongly these are associated with actual stock market trading data, and finally introduce a support vector machine to predict market movements. In the experiments we analyze roughly three months of news data together with Shanghai and Shenzhen stock market data. The results show that topics related to international trade and urbanization are closely linked to market movements, that the proposed algorithm predicts the day's market direction fairly accurately, and that a stock index futures strategy built on it also performs well.
13.
Multicategory Classification by Support Vector Machines  (Cited by: 8, self-citations: 0, other citations: 8)
Erin J. Bredensteiner Kristin P. Bennett 《Computational Optimization and Applications》1999,12(1-3):53-79
We examine the problem of how to discriminate between objects of three or more classes. Specifically, we investigate how two-class discrimination methods can be extended to the multiclass case. We show how the linear programming (LP) approaches based on the work of Mangasarian and quadratic programming (QP) approaches based on Vapnik's Support Vector Machine (SVM) can be combined to yield two new approaches to the multiclass problem. In LP multiclass discrimination, a single linear program is used to construct a piecewise-linear classification function. In our proposed multiclass SVM method, a single quadratic program is used to construct a piecewise-nonlinear classification function. Each piece of this function can take the form of a polynomial, a radial basis function, or even a neural network. For the k > 2-class problems, the SVM method as originally proposed required the construction of a two-class SVM to separate each class from the remaining classes. Similarly, k two-class linear programs can be used for the multiclass problem. We performed an empirical study of the original LP method, the proposed k LP method, the proposed single QP method and the original k QP methods. We discuss the advantages and disadvantages of each approach.
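The classification functions described above reduce at prediction time to an argmax over per-class score functions; a minimal sketch with hand-set linear scores (the weights below are illustrative, not learned by the LP/QP formulations of the paper):

```python
def predict(weights, biases, x):
    """Piecewise-linear multiclass rule: the class with the largest
    linear score w_c . x + b_c wins; decision boundaries lie where
    two scores tie, making the overall classifier piecewise linear."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return max(range(len(scores)), key=lambda c: scores[c])

# Three classes in 2-D with hand-set score functions:
W = [[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
b = [0.0, -1.0, -0.5]
print(predict(W, b, [-2.0, 0.0]))  # 0
print(predict(W, b, [3.0, 0.0]))   # 1
```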
14.
This paper investigates the approximation of multivariate functions from data via linear combinations of translates of a positive definite kernel from a reproducing kernel Hilbert space. If standard interpolation conditions are relaxed by Chebyshev-type constraints, one can minimize the norm of the approximant in the Hilbert space under these constraints. By standard arguments of optimization theory, the solutions take a simple form, based on the data related to the active constraints, called support vectors in the context of machine learning. The corresponding quadratic programming problems are investigated to some extent. Using monotonicity results concerning the Hilbert space norm, iterative techniques based on small quadratic subproblems on active sets are shown to be finite, even if they drop part of their previous information and even if they are used for infinite data, e.g., in the context of online learning. Numerical experiments confirm the theoretical results.
Dedicated to C.A. Micchelli on the occasion of his 60th birthday.
Mathematics subject classifications (2000): 65D05, 65D10, 41A15, 41A17, 41A27, 41A30, 41A40, 41A63.
15.
Andreas Christmann 《应用数学学报(英文版)》2005,21(2):193-208
The goals of this paper are twofold: we describe common features in data sets from motor vehicle insurance companies and we investigate a general strategy which exploits the knowledge of such features. The results of the strategy are a basis to develop insurance tariffs. We use a nonparametric approach based on a combination of kernel logistic regression and ε-support vector regression which both have good robustness properties. The strategy is applied to a data set from motor vehicle insurance companies.
16.
We improve the twin support vector machine (TWSVM) into a novel nonparallel-hyperplane classifier, termed ITSVM (improved twin support vector machine), for binary classification. By introducing different Lagrangian functions for the primal problems in the TWSVM, we obtain an improved dual formulation of TWSVM; the resulting ITSVM algorithm overcomes the common drawbacks of the TWSVMs and inherits the essence of the standard SVMs. Firstly, ITSVM does not need to compute the large inverse matrices before training, which is inevitable for the TWSVMs. Secondly, unlike the TWSVMs, the kernel trick can be applied directly to ITSVM in the nonlinear case, so nonlinear ITSVM is theoretically superior to nonlinear TWSVM. Thirdly, ITSVM can be solved efficiently by the successive overrelaxation (SOR) technique or the sequential minimal optimization (SMO) method, which makes it more suitable for large-scale problems. We also prove that the standard SVM is a special case of ITSVM. Experimental results show the efficiency of our method in both computation time and classification accuracy.
17.
Emilio Carrizosa 《TOP》2006,14(2):399-424
A key problem in Multiple-Criteria Decision Making is how to measure the importance of the different criteria when just a partial preference relation among actions is given. In this note we address the problem of constructing a linear score function (and thus how to associate weights of importance to the criteria) when a binary relation comparing actions and partial information (relative importance) on the criteria are given. It is shown that these tasks can be done via Support Vector Machines, an increasingly popular Data Mining technique, which reduces the search of the weights to the resolution of (a series of) nonlinear convex optimization problems with linear constraints. An interactive method is then presented and illustrated by solving a multiple-objective 0–1 knapsack problem. Extensions to the case in which data are imprecise (given by intervals) or intransitivities in strict preferences exist are outlined.
18.
The paper is related to the error analysis of Multicategory Support Vector Machine (MSVM) classifiers based on reproducing kernel Hilbert spaces. We choose the polynomial kernel as Mercer kernel and give the error estimate with De La Vallée Poussin means. We also introduce the standard estimation of sample error, and derive the explicit learning rate.
19.
A Gaussian-RBF-kernel support vector machine is used to predict the spread series of the cointegration relation between the dominant and sub-dominant cotton commodity futures contracts; the optimal SVM parameters are determined, suitable position-opening and position-closing thresholds are chosen, and intra-commodity calendar-spread arbitrage is carried out. Comparing against arbitrage based on a polynomial-kernel support vector machine shows that, at every opening/closing threshold, the return of Gaussian-RBF-kernel SVM arbitrage is distinctly higher than that of polynomial-kernel SVM arbitrage.
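The threshold rule described above can be sketched independently of the SVM: given a (predicted) spread series, open a position when the spread deviates beyond the open threshold and close it when the spread reverts inside the close threshold (the threshold and spread values below are illustrative, not from the paper):

```python
def spread_signals(spread, mean, open_thr, close_thr):
    """Emit open/close signals for calendar-spread arbitrage.
    position: 0 = flat, +1 = long the spread, -1 = short the spread."""
    position, signals = 0, []
    for s in spread:
        dev = s - mean
        if position == 0 and abs(dev) > open_thr:
            position = -1 if dev > 0 else 1   # bet on mean reversion
            signals.append("open")
        elif position != 0 and abs(dev) < close_thr:
            position = 0
            signals.append("close")
        else:
            signals.append("hold")
    return signals

print(spread_signals([0.1, 0.9, 0.7, 0.05], mean=0.0,
                     open_thr=0.8, close_thr=0.2))
# ['hold', 'open', 'hold', 'close']
```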
20.
Three kernel functions that preserve local sample information well are used as weight functions for the Gegenbauer kernel, yielding three modified Gegenbauer kernels. Classification models are built with support vector machines, and the classification performance on 10 UCI data sets is studied comprehensively. The results show that all of the modified kernels generalize better than the Gaussian and linear kernels, with the Laplacian modification performing most notably.