Similar Literature
 19 similar records found
1.
A Method for Optimal Parameter Selection in Support Vector Machines
This paper presents a method for optimal parameter selection in support vector machines. The regularization parameter C of the binary support vector machine (SVM) is found by combining a genetic algorithm with a deterministic algorithm to solve an equilibrium-constrained optimization problem, with C treated as a variable of that problem. The genetic algorithm solves the optimization problem over C, while the deterministic algorithm solves the constraints for each value of C. Numerical results show that the value of C obtained this way markedly improves the generalization performance of the SVM.
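A minimal sketch of the idea in scikit-learn, assuming a generic synthetic dataset: a toy mutation-and-selection loop stands in for the paper's GA/deterministic hybrid (which is not reproduced here), and cross-validation accuracy stands in for the generalization criterion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

def fitness(log_c):
    # Generalization estimate for a candidate regularization parameter C
    return cross_val_score(SVC(C=10.0 ** log_c), X, y, cv=5).mean()

rng = np.random.default_rng(0)
pop = rng.uniform(-2, 3, size=12)           # log10(C) in [1e-2, 1e3]
for _ in range(20):                          # toy GA: select + mutate
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-6:]]   # keep the fittest half
    children = parents + rng.normal(0, 0.3, size=parents.shape)
    pop = np.concatenate([parents, children])
best_c = 10.0 ** pop[np.argmax([fitness(c) for c in pop])]
print(f"selected C = {best_c:.4g}")
```

Searching in log10(C) keeps the mutation step meaningful across several orders of magnitude.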

2.
Music genre is a label used to distinguish and describe different kinds of music, and automatically classifying large collections of music into genres with mathematical and computational methods is an active research topic worldwide. The support vector machine (SVM) is widely used for automatic genre classification because of its rigorous mathematical foundation; however, its penalty parameter and kernel parameter strongly affect classification performance. Taking cross-validation accuracy as the fitness value, the artificial bee colony (ABC) algorithm is used here to optimize the SVM's control parameters. In simulation experiments on automatic genre classification, the ABC-optimized SVM achieved an average prediction accuracy of 80.8% (83% at best), 18.8 percentage points higher than an SVM with default parameters. The results also show the superiority of the ABC algorithm over particle swarm optimization and the genetic algorithm.
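The abstract's key mechanism is using cross-validation accuracy as the ABC fitness value. A minimal sketch of that fitness function, assuming scikit-learn and placeholder feature arrays (the 24 audio features and 10 genres below are hypothetical); the employed/onlooker/scout phases are omitted, and any off-the-shelf ABC implementation could call this function.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def abc_fitness(position, X, y):
    """Fitness of a food source: position = (log10 C, log10 gamma)."""
    C, gamma = 10.0 ** position[0], 10.0 ** position[1]
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=5).mean()  # CV accuracy

# Example: evaluate one candidate food source on placeholder data
X = np.random.rand(200, 24)        # e.g., 24 timbral/rhythm features per clip
y = np.random.randint(0, 10, 200)  # 10 genres (hypothetical labels)
print(abc_fitness(np.array([1.0, -1.0]), X, y))
```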

3.
The tunable parameters of a mixed-kernel support vector machine (SVM) are usually set by experience or manual trial and error, which cannot guarantee optimal values. To overcome this limitation, a parallel hybrid of the artificial bee colony and particle swarm optimization algorithms (ABC-PSO) is proposed to optimize the mixed-kernel SVM parameters and find the best parameter combination. The resulting SVM model is applied to speech recognition; simulation experiments on speech databases in three different languages verify that the SVM optimized by the hybrid algorithm generalizes better and recognizes speech better than the SVM optimized by PSO alone.

4.
A semi-supervised support vector machine classification algorithm based on overall risk minimization (ORM) is presented. By bringing a working set into training, the algorithm improves the generalization ability of the standard SVM on datasets for which the training set provides insufficient information, and it handles large amounts of unlabeled data effectively. The concave semi-supervised SVM algorithm is then applied to the comprehensive evaluation of county-level sustainable development capacity; an empirical analysis of 15 counties in Handan demonstrates its feasibility and effectiveness.
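The paper's ORM-based concave S3VM is not available in common libraries; as a loose stand-in, the sketch below uses scikit-learn's SelfTrainingClassifier to show the general pattern of training an SVM with a largely unlabeled working set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=1)
y_part = y.copy()
y_part[100:] = -1  # mark most samples as unlabeled (-1 by convention)

# Self-training wraps the SVM and gradually labels the working set
model = SelfTrainingClassifier(SVC(probability=True, gamma="scale"))
model.fit(X, y_part)
print("labeled during training:", (model.transduction_ != -1).sum())
```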

5.
To carry out robot navigation tasks more accurately, a method based on an improved particle swarm optimization (PSO) of the SVM parameters is proposed. First, principal component analysis reduces the dimensionality of the data; then the improved PSO algorithm optimizes the SVM penalty parameter c and kernel parameter g; finally, the optimized parameters are used in the SVM to classify and recognize the navigation task. Compared with other algorithms, the SVM optimized by the improved PSO achieves clearly better results. Such classification supports robot navigation well and has considerable value for future robotics research.
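A minimal sketch of the pipeline on a stand-in dataset (scikit-learn's digits instead of robot navigation data), with a plain global-best PSO in place of the paper's improved variant; the inertia and acceleration constants are conventional defaults, not the paper's values.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def fitness(p):
    # PCA for dimensionality reduction, then SVM with candidate (c, g)
    c, g = 10.0 ** p[0], 10.0 ** p[1]
    pipe = make_pipeline(PCA(n_components=20), SVC(C=c, gamma=g))
    return cross_val_score(pipe, X, y, cv=3).mean()

rng = np.random.default_rng(0)
pos = rng.uniform([-1, -4], [3, 0], size=(8, 2))  # swarm of (log c, log g)
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pval.argmax()]
for _ in range(10):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    val = np.array([fitness(p) for p in pos])
    improved = val > pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[pval.argmax()]
print("best (C, gamma):", 10.0 ** gbest)
```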

6.
Non-intrusive load monitoring and disaggregation techniques have been widely adopted in recent years. Here, 14 steady-state indicators are selected as load features and a non-intrusive load-signature identification model based on the support vector machine (SVM) is built. Load signatures are identified with the pairwise (one-versus-one) classification algorithm of the multi-class SVM, and randomly sampled data are used for testing. The results show that the method identifies load signatures accurately, confirming the validity and correctness of the proposed model and method.
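scikit-learn's SVC implements exactly this pairwise (one-versus-one) scheme, so a sketch is short; the 14-feature data and 6 appliance classes below are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: 14 steady-state features per sample, 6 appliance classes
rng = np.random.default_rng(0)
X = rng.random((500, 14))
y = rng.integers(0, 6, 500)  # hypothetical load-signature labels

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", decision_function_shape="ovo")  # pairwise SVMs
clf.fit(Xtr, ytr)
print("pairwise decision values shape:", clf.decision_function(Xte).shape)
# n_classes * (n_classes - 1) / 2 = 15 pairwise classifiers for 6 classes
```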

7.
Accurately identifying e-commerce credit risk helps enterprises strengthen risk prevention and reduce losses. A classification model for e-commerce credit risk based on rough sets (RS), the genetic algorithm (GA) and the support vector machine (SVM), denoted RS-GA-SVM, is established. First, RS reduces the classification indicators, selecting the key factors influencing e-commerce credit risk. Second, GA optimizes the SVM model parameters, and the model is applied to e-commerce credit risk classification. Finally, empirical results show that the RS-GA-SVM model achieves high classification accuracy and efficiency.

8.
The SVM (Support Vector Machine) classification algorithm is receiving growing attention in high-resolution remote sensing image classification. This paper first describes the mathematical principles of the SVM algorithm in detail, then presents the model construction, spectral feature extraction and classifier design of an SVM-based classification method for high-resolution remote sensing images. Simulations are run on Indian Pines, the standard AVIRIS multi-band remote sensing dataset, with classification performance evaluated by the confusion matrix and the kappa coefficient. Finally, classification of high-resolution images of the area around the authors' university confirms the method's effectiveness for land-cover classification in high-resolution remote sensing imagery.
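A minimal sketch of the evaluation step, with scikit-learn's digits dataset standing in for the Indian Pines pixels and bands:

```python
from sklearn.datasets import load_digits  # stand-in for Indian Pines
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pred = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr).predict(Xte)

print(confusion_matrix(yte, pred))             # per-class error structure
print("kappa:", cohen_kappa_score(yte, pred))  # chance-corrected agreement
```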

9.
Research on Data Mining Algorithms Based on Fuzzy Theory (I)
Data mining is a new field of data processing, and the support vector machine is a new data mining method that has been applied successfully in many domains. The support vector machine still has significant limitations, however: when the training set contains fuzzy information, the standard machine is powerless. To handle the general case in which a support vector machine involves fuzzy information (fuzzy parameters), this paper studies fuzzy chance-constrained programming and the fuzzy features of fuzzy classification together with their representation, establishes a theory of fuzzy support vector classification machines, and gives a fuzzy support vector classification algorithm for the fuzzily linearly separable case.

10.
张剑  王波 《经济数学》2017,34(2):84-88
As a dynamic, non-stationary time series, Shibor fluctuates randomly, and its volatility is hard to predict accurately. The support vector machine (SVM) performs well in nonlinear time-series regression and forecasting, and the key to its prediction accuracy and generalization ability is parameter selection. Grid search (Grid-Search) and particle swarm optimization (PSO) are therefore used separately to optimize the SVM parameters c and g, and the parameter-optimized SVM nonlinear regression forecasts are compared with those of a traditional ARIMA time-series model. Experiments show that the optimized SVM regression method is more accurate than ARIMA and has substantial practical value.
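A minimal sketch of the grid-search half of the comparison, assuming scikit-learn and a synthetic random-walk series standing in for Shibor; lagged values form the regression design matrix, and a time-series split replaces plain k-fold CV to respect temporal order.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

# Synthetic stand-in for the rate series: 5 lagged values -> next value
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300)) + 3.0
X = np.column_stack([series[i:-5 + i] for i in range(5)])
y = series[5:]

grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": 10.0 ** np.arange(-1, 4), "gamma": 10.0 ** np.arange(-4, 1)},
    cv=TimeSeriesSplit(n_splits=5),  # folds never look into the future
)
grid.fit(X, y)
print("best c, g:", grid.best_params_)
```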

11.
The support vector machine (SVM) represents a new and very promising technique for machine learning tasks involving classification, regression or novelty detection. Improvements of its generalization ability can be achieved by incorporating prior knowledge of the task at hand. We propose a new hybrid algorithm consisting of signal-adapted wavelet decompositions and hard margin SVMs for waveform classification. The adaptation of the wavelet decompositions is tailored to hard margin SV classifiers with radial basis functions as kernels. It allows the representation of the data to be optimized before the SVM is trained and does not suffer from computationally expensive validation techniques. We assess the performance of our algorithm against the background of current concerns in medical diagnostics, namely the classification of endocardial electrograms and the detection of otoacoustic emissions. Here the performance of hard margin SVMs can be significantly improved by our adapted preprocessing step.
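A loose sketch of the preprocessing idea with PyWavelets (assumed installed) and scikit-learn: a fixed db4 decomposition with per-band energies stands in for the paper's signal-adapted decomposition, and a very large C approximates the hard margin. The synthetic two-class waveforms are placeholders.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed
from sklearn.svm import SVC

def wavelet_features(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band as a fixed-length feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Placeholder waveforms: two classes of noisy tones (hypothetical data)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
X = np.array([
    wavelet_features(np.sin(2 * np.pi * (5 + 10 * (i % 2)) * t)
                     + 0.3 * rng.normal(size=t.size))
    for i in range(100)
])
y = np.arange(100) % 2

# A very large C approximates a hard-margin RBF SVM on these features
clf = SVC(kernel="rbf", C=1e6, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```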

12.
《Optimization》2012,61(7):1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problems as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We finally obtain the solution of the converted problem by a fixed-point iterative algorithm, establish the convergence of the algorithm, and present some preliminary results of our computational experiments with the method.

13.
A new quadratic kernel-free non-linear support vector machine (called QSVM) is introduced. The SVM optimization problem can be stated as follows: maximize the geometric margin subject to all training data having a functional margin greater than a constant. The functional margin is W^T X + b, the equation of the hyperplane used for linear separation; the geometric margin is 1/||W||; and the constant in this case is equal to one. To separate the data non-linearly, a dual optimization form and the kernel trick must normally be used. In this paper, a quadratic decision function capable of separating the data non-linearly is used instead. The geometric margin is proved to equal the inverse of the norm of the gradient of the decision function, and the functional margin is the value of the quadratic function. QSVM is shown to reduce to a quadratic optimization setting that requires neither a dual form nor the kernel trick. Comparisons between QSVM and the SVM with Gaussian and polynomial kernels on databases from the UCI repository are shown.
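In symbols, a reconstruction from the abstract's definitions (the names A, b, c for the quadratic coefficients are assumptions, as the abstract does not fix notation):

```latex
% Linear SVM: functional margin f(x), geometric margin gamma
f(x) = W^{\top} x + b, \qquad
\gamma = \frac{1}{\lVert \nabla f \rVert} = \frac{1}{\lVert W \rVert}
% QSVM: quadratic decision function, margin = inverse gradient norm
f(x) = x^{\top} A x + b^{\top} x + c, \qquad
\gamma(x) = \frac{1}{\lVert \nabla f(x) \rVert} = \frac{1}{\lVert 2Ax + b \rVert}
```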

14.
The support vector machine (SVM) is a popular tool for machine learning tasks. It has been successfully applied in many fields, but parameter optimization for SVM remains an open research issue. In this paper, to tune the parameters of SVM, one form of inter-cluster distance in the feature space is calculated for all the SVM classifiers of multi-class problems. The inter-cluster distance in the feature space indicates how well the classes are separated: a larger value implies a more separated pair of classes. For each classifier, the optimal kernel parameter yielding the largest inter-cluster distance is found. A new continuous search interval of the kernel parameter, covering the optimal kernel parameter of each class pair, is then determined. A self-adaptive differential evolution algorithm searches for the optimal parameter combination in the continuous intervals of the kernel parameter and the penalty parameter. Finally, the proposed method is applied to several real-world datasets as well as to fault diagnosis for rolling element bearings. The results show that it is both effective and computationally efficient for parameter optimization of multi-class SVM.
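A minimal sketch of one plausible form of the criterion, assuming an RBF kernel: the squared distance between the two class means in feature space expands entirely in kernel evaluations, so it can be computed (and maximized over the kernel parameter) without training any SVM. The Gaussian blobs are placeholder data, and the paper's exact definition of inter-cluster distance may differ.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def inter_cluster_distance(Xa, Xb, gamma):
    """Squared distance between class means in the RBF feature space,
    computed via the kernel trick."""
    Kaa = rbf_kernel(Xa, Xa, gamma=gamma)
    Kbb = rbf_kernel(Xb, Xb, gamma=gamma)
    Kab = rbf_kernel(Xa, Xb, gamma=gamma)
    return Kaa.mean() + Kbb.mean() - 2 * Kab.mean()

rng = np.random.default_rng(0)
Xa = rng.normal(0, 1, (50, 4))  # placeholder class +
Xb = rng.normal(2, 1, (50, 4))  # placeholder class -
# Pick the kernel width that maximizes class separation in feature space
gammas = 10.0 ** np.arange(-3, 2)
best = max(gammas, key=lambda g: inter_cluster_distance(Xa, Xb, g))
print("gamma maximizing inter-cluster distance:", best)
```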

15.
In recent years, kernel-based methods have proved very successful for many real-world learning problems. One of the main reasons for this success is their efficiency on large data sets, which follows from the fact that kernel methods like support vector machines (SVM) are based on a convex optimization problem. Solving a new learning problem can then often be reduced to choosing an appropriate kernel function and kernel parameters. However, even the most powerful kernel methods can still fail on quite simple data sets when the inherent feature space induced by the kernel function is not sufficient. In these cases, an explicit feature-space transformation or detection of latent variables proved more successful. Since such explicit feature construction is often not feasible for large data sets, the ultimate goal for efficient kernel learning would be the adaptive creation of new and appropriate kernel functions. It cannot, however, be guaranteed that such a kernel function still leads to a convex optimization problem for support vector machines. Therefore, we have to enhance the optimization core of the learning method itself before we can use it with arbitrary, i.e., non-positive semidefinite, kernel functions. This article motivates the use of appropriate feature spaces and discusses the possible consequences leading to non-convex optimization problems. We show that these new non-convex SVMs are at least as accurate as their quadratic programming counterparts on eight real-world benchmark data sets in terms of generalization performance, and they always outperform traditional approaches in terms of the original optimization problem. Additionally, the proposed algorithm is more generic than existing traditional solutions, since it also works for non-positive semidefinite or indefinite kernel functions.

16.
We propose a new binary classification and variable selection technique especially designed for high-dimensional predictors. Among many predictors, typically only a small fraction have significant impact on prediction. In such a situation, more interpretable models with better prediction accuracy can be obtained by variable selection along with classification. By adding an ℓ1-type penalty to the loss function, common classification methods such as logistic regression or support vector machines (SVM) can perform variable selection. Existing penalized SVM methods all attempt to solve jointly for all the parameters involved in the penalization problem at once. When the data dimension is very high, this joint optimization problem is very complex and requires a lot of memory. In this article, we propose a new penalized forward search technique that reduces high-dimensional optimization problems to one-dimensional optimization by iterating the selection steps. The new algorithm can be regarded as a forward selection version of the penalized SVM and its variants. The advantage of optimizing in one dimension is that the location of the optimum solution can be found by intelligent search, exploiting the convexity and piecewise linear or quadratic structure of the criterion function. In each step, the predictor most able to predict the outcome is added to the model, and the search is repeated iteratively until convergence. Comparison of our new classification rule with ℓ1-SVM and other common methods shows very promising performance, in that the proposed method leads to much leaner models without compromising misclassification rates, particularly for high-dimensional predictors.
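For contrast with the joint optimization the article criticizes, a minimal sketch of the standard ℓ1-penalized linear SVM in scikit-learn (not the proposed forward-search algorithm); the dataset is synthetic with only a few informative predictors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# High-dimensional data where only a few predictors matter
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=8, random_state=0)

# The l1 penalty drives most coefficients to exactly zero
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1)
clf.fit(X, y)
print("selected predictors:", int(np.sum(clf.coef_ != 0)), "of", X.shape[1])
```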

17.
The support vector machine (SVM) has attracted considerable attention recently due to its successful applications in various domains. However, by maximizing the margin of separation between the two classes in a binary classification problem, SVM solutions often suffer two serious drawbacks. First, the SVM separating hyperplane is usually very sensitive to training samples, since it depends strongly on support vectors, which are only a few points located on the wrong side of the corresponding margin boundaries. Second, the separating hyperplane is equidistant to the two classes, which are treated as equally important when optimizing its location, regardless of the number of training data and their dispersion in each class. In this paper, we propose a new SVM solution, the adjusted support vector machine (ASVM), based on a new loss function that adjusts the SVM solution to account for the sample sizes and dispersions of the two classes. Numerical experiments show that ASVM outperforms the conventional SVM, especially when the two classes differ greatly in sample size and dispersion.

18.
We improve the twin support vector machine (TWSVM) into a novel nonparallel-hyperplane classifier, termed ITSVM (improved twin support vector machine), for binary classification. By introducing different Lagrangian functions for the primal problems in the TWSVM, we obtain an improved dual formulation of TWSVM; the resulting ITSVM algorithm overcomes the common drawbacks of TWSVMs and inherits the essence of standard SVMs. First, ITSVM does not need to compute the large inverse matrices before training that are unavoidable in TWSVMs. Second, unlike the TWSVMs, the kernel trick can be applied directly to ITSVM in the nonlinear case, so nonlinear ITSVM is theoretically superior to nonlinear TWSVM. Third, ITSVM can be solved efficiently by the successive overrelaxation (SOR) technique or the sequential minimal optimization (SMO) method, which makes it more suitable for large-scale problems. We also prove that the standard SVM is a special case of ITSVM. Experimental results show the efficiency of our method in both computation time and classification accuracy.

19.
GA-SVM-Based Prediction of the Air Quality Index for Taiyuan
Given the complexity, variability and uncertainty of the atmospheric environment, air-pollutant monitoring data for Taiyuan from 2014 to 2016 are used to build prediction models for the air quality index (AQI): an improved particle swarm optimization algorithm (IPSO) and a genetic algorithm (GA) are each combined with the support vector machine (SVM), with model parameters tuned by search. Experimental results show that GA-SVM outperforms IPSO-SVM and plain SVM in prediction accuracy, error rate and reliability. The GA-SVM model is therefore better suited to AQI prediction and offers a sound theoretical basis and a new forecasting method for air pollution prevention and control.
