Similar Documents
Found 18 similar documents (search time: 62 ms)
1.
To carry out robot navigation tasks more accurately, a method is proposed that uses an improved particle swarm optimization (PSO) algorithm to tune the parameters of a support vector machine (SVM). Principal component analysis (PCA) is first applied to reduce the dimensionality of the data; the improved PSO algorithm then optimizes the SVM penalty parameter c and kernel parameter g, and the optimized parameters are fed into the SVM to classify and recognize robot navigation tasks. Compared with other algorithms, the SVM tuned by the improved PSO achieves clearly better results. This classification approach supports effective robot navigation and has considerable application value for future robotics research.
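A minimal sketch of the PSO search, assuming the standard inertia-weight update; the objective below is a hypothetical stand-in for the SVM cross-validation error over (log c, log g), which in the actual method would train and validate an SVM (after PCA preprocessing) at each particle position:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (a sketch, not the paper's exact variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)]                   # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g, objective(g)

# Hypothetical stand-in for the SVM cross-validation error over (log c, log g);
# here simply a quadratic bowl with its minimum at (1, -2).
def cv_error(p):
    return float(np.sum((p - np.array([1.0, -2.0])) ** 2))

best, best_val = pso(cv_error, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

In the real pipeline, `cv_error` would be replaced by k-fold cross-validation accuracy of an SVM trained with the candidate (c, g).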

2.
To address the long runtime and high memory demands of applying a genetic algorithm (GA) alone to large-scale data, a neural network is embedded into the GA to form a hybrid intelligent genetic algorithm for optimizing SVM kernel parameters. Numerical experiments show that the algorithm is feasible and effective for SVM kernel parameter optimization, yielding good kernel parameter combinations, high classification accuracy, and good generalization ability.
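The GA backbone of the hybrid method can be sketched as follows; the embedded neural network that the paper uses to cheapen fitness evaluation is omitted, and the fitness here is a hypothetical stand-in for the SVM cross-validation error over kernel parameters:

```python
import numpy as np

def ga_minimize(fitness, bounds, pop=30, gens=40, mut=0.1, seed=1):
    """Bare-bones real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation (a sketch, not the paper's variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (pop, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(p) for p in x])
        # binary tournament selection: the fitter of two random individuals wins
        idx = rng.integers(0, pop, (pop, 2))
        parents = x[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # arithmetic crossover between each parent and its mirror
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # Gaussian mutation, clipped back into the search box
        children += rng.normal(0.0, mut, children.shape)
        x = np.clip(children, lo, hi)
    f = np.array([fitness(p) for p in x])
    return x[np.argmin(f)], float(f.min())

# Hypothetical stand-in for the SVM cross-validation error over kernel parameters.
best, val = ga_minimize(lambda p: float((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2),
                        np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```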

3.
The generalization performance of regularized learning algorithms with general convex loss functions and varying Gaussian kernels is studied, with the goal of establishing a satisfactory upper bound on the generalization error. The generalization error can be measured by the regularization error and the sample error. Exploiting properties of the Gaussian kernel, an upper bound on the regularization error is obtained by constructing a radial basis function (RBF) network, and an upper bound on the sample error is obtained via a projection operator and the covering number of the reproducing kernel Hilbert space induced by the Gaussian kernel. The results show that a suitable choice of the parameters σ and λ improves the generalization performance of the learning algorithm.
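In the notation standard in this literature (the symbols here are illustrative, not necessarily the paper's), the error decomposition the abstract relies on reads:

```latex
\mathcal{E}(f_z) - \mathcal{E}(f_\rho)
  \;\le\; \underbrace{\mathcal{D}(\lambda)}_{\text{regularization error}}
  \;+\; \underbrace{\mathcal{S}(z,\lambda)}_{\text{sample error}}
```

where $f_z$ is the hypothesis returned by the regularized algorithm on the sample $z$ and $f_\rho$ is the target function; the RBF-network construction bounds $\mathcal{D}(\lambda)$, while the projection operator and covering-number argument bound $\mathcal{S}(z,\lambda)$.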

4.
A univariate exponential kernel function for primal-dual interior-point algorithms for linear programming is presented. The properties of this exponential kernel function and its associated barrier function are studied first. A primal-dual interior-point algorithm for linear programming is then designed based on this kernel function, achieving the best currently known iteration bound for small-update methods. Finally, numerical examples compare the computational performance of the primal-dual interior-point algorithm based on the exponential kernel function with that of the algorithm based on the logarithmic kernel function.

5.
王泽兴 《数学杂志》2023,(3):229-246
Large-margin Unified Machines (LUMs) have attracted wide attention in classification learning. LUMs are a family of large-margin classifiers that provide a unique way of transitioning from soft to hard classification. This paper studies an online binary classification algorithm based on independent but non-identically distributed samples and the LUM loss function, where the parameter of the LUM loss decreases gradually over the iterations. Under this assumption, convergence rates of the online algorithm are established in a reproducing kernel Hilbert space (RKHS).

6.
To avoid the hand-crafted features and the time-consuming sliding-window search of traditional object detection, a region-proposal and deep convolutional network approach is applied to traffic object detection, extracting features directly from raw images. Selective Search first generates a large number of candidate regions from the source image; these candidates serve as input samples to train a deep convolutional network that performs feature extraction automatically; an SVM classifier then labels the features extracted from each candidate region; finally, greedy non-maximum suppression refines the positions of the candidate boxes. Detection experiments on single targets, multiple targets, and multiple classes of traffic objects, implemented in MATLAB, demonstrate the feasibility and effectiveness of the method.
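The final refinement step, greedy non-maximum suppression, is standard and can be sketched as follows (box coordinates, scores, and the IoU threshold are illustrative):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes whose IoU with it exceeds iou_thresh, repeat."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the kept box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # survivors only
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the two heavily overlapping boxes collapse to one
```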

7.
A face recognition method based on important facial features is proposed. The important features of a face are first selected and made explicit, and principal component analysis is applied to them; a support vector machine (SVM) classifier over these features then locates the important features in a test face image, while a second SVM classifier determines the class of the face image. Simulation experiments on the ORL face image database show that the method outperforms typical face recognition methods based on holistic features and is more robust.
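The PCA step applied to the selected features can be sketched as follows; the data here are synthetic, standing in for the extracted facial-feature vectors:

```python
import numpy as np

def pca_fit(X, k):
    """PCA via SVD of the centered data: returns the mean and the
    top-k principal directions (rows of Vt)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, components):
    """Project centered data onto the retained principal directions."""
    return (X - mu) @ components.T

rng = np.random.default_rng(0)
# elongated synthetic cloud: std 3 along axis 0, std 0.3 along axis 1
X = rng.normal(size=(50, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
mu, comps = pca_fit(X, 1)
Z = pca_transform(X, mu, comps)   # 1-D representation fed to the SVM
```

In the paper's pipeline, `Z` (computed from the important-feature vectors) would be the input to the SVM classifiers.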

8.
A self-adaptive Uzawa block relaxation method (SUBRM) is developed for the numerical solution of a class of Stokes problems with nonlinear slip boundary conditions. Starting from the variational formulation of the problem, an auxiliary variable is introduced to transform the original problem into a saddle-point problem based on an augmented Lagrangian, which is solved by the Uzawa block relaxation method (UBRM). To improve performance, a self-adaptive rule is proposed that uses the iterates to select a suitable penalty parameter automatically. The advantage of the algorithm is that each iteration requires solving only one linear problem, while the auxiliary variable is computed explicitly. Convergence of the algorithm is analyzed theoretically, and numerical results confirm its feasibility and effectiveness.

9.
Since an explicit mathematical model linking tobacco chemical composition to cigarette aroma grade is difficult to establish, an ensemble classification method for tobacco aroma grading based on glowworm swarm optimization is proposed. Multiple individual support vector machines with mixed kernels are first trained independently; an improved discrete glowworm swarm optimization algorithm then selects a subset of individual classifiers with high accuracy and large diversity to form the ensemble; the final prediction is obtained by majority voting. Comparative experiments show that the algorithm has a clear advantage in classification accuracy, demonstrating its effectiveness and providing a reliable basis for tobacco aroma grading.
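The majority-voting step that combines the selected individual SVMs can be sketched as follows (the base classifiers' label outputs are illustrative):

```python
import numpy as np

def majority_vote(predictions):
    """Combine base classifiers' integer labels by per-sample majority vote.
    `predictions` has shape (n_classifiers, n_samples)."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # count votes per class for each sample (column)
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Three hypothetical base SVMs voting on four samples:
preds = [[0, 1, 2, 1],
         [0, 1, 1, 1],
         [1, 1, 2, 0]]
final = majority_vote(preds)
```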

10.
The support vector machine (SVM) is a relatively new machine learning method in data mining. A geometric algorithm for the SVM classification problem based on the compressed convex hull (CCH) is proposed. Compared with the reduced convex hull (RCH), the CCH preserves the geometric shape of the data, and a necessary and sufficient condition for determining its extreme points is easy to obtain. As practical aspects of the CCH, a sparsification method and a probabilistic acceleration algorithm for the geometric algorithm are discussed. Numerical experiments show that the proposed algorithms reduce kernel computations and achieve good performance.

11.
Support vector machines (SVM) are a popular tool for machine learning tasks and have been successfully applied in many fields, but parameter optimization for SVM remains an open research issue. In this paper, to tune the parameters of SVM, a form of inter-cluster distance in the feature space is calculated for all the SVM classifiers of a multi-class problem. The inter-cluster distance in the feature space reflects how well the classes are separated: a larger value implies a more separated pair of classes. For each classifier, the kernel parameter yielding the largest inter-cluster distance is found, and a new continuous search interval of the kernel parameter covering the optimal value for each class pair is determined. A self-adaptive differential evolution algorithm then searches for the optimal parameter combination within the continuous intervals of the kernel parameter and the penalty parameter. Finally, the proposed method is applied to several real-world datasets as well as to fault diagnosis for rolling element bearings. The results show that it is both effective and computationally efficient for parameter optimization of multi-class SVM.
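The inter-cluster distance in the feature space can be computed from kernel evaluations alone, without ever forming the feature map. A sketch, assuming an RBF kernel and the squared distance between class centroids as the particular "form" of inter-cluster distance (the paper may use a different variant), on two synthetic classes:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def inter_cluster_distance(A, B, gamma):
    """Squared distance between the class centroids in the RBF feature space,
    computed purely from kernel evaluations (the kernel trick):
    ||phi_mean(A) - phi_mean(B)||^2."""
    return (rbf_kernel(A, A, gamma).mean()
            - 2.0 * rbf_kernel(A, B, gamma).mean()
            + rbf_kernel(B, B, gamma).mean())

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.3, (30, 2))   # class 1 around the origin
B = rng.normal(3.0, 0.3, (30, 2))   # class 2 around (3, 3)

# pick the kernel parameter that maximizes class separation
gammas = [0.01, 0.1, 1.0, 10.0]
best_gamma = max(gammas, key=lambda g: inter_cluster_distance(A, B, g))
```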

12.
In SVM predictive modeling, the kernel function maps a nonlinear problem in a low-dimensional feature space into a linear problem in a high-dimensional feature space, and the characteristics of the kernel strongly influence both learning and prediction. Considering the fitting and generalization properties of two typical kernels, the global polynomial kernel and the local RBF kernel, an SVM based on a mixed kernel function is adopted for predictive modeling. To evaluate the modeling performance of different kernels and obtain better prediction, a genetic algorithm adaptively evolves the parameters of the SVM model, which is applied to the practical problem of equipment cost prediction. The computations show that the mixed-kernel SVM achieves better prediction performance than single-kernel SVMs and can be popularized as an effective predictive modeling method in equipment management.
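A mixed kernel is typically formed as a convex combination of the global (polynomial) and local (RBF) kernels; any mixing weight in [0, 1] keeps the result positive semidefinite. A sketch with illustrative parameter values (in the paper, lam, gamma, degree, and the SVM parameters are evolved by the GA):

```python
import numpy as np

def mixed_kernel(X, Y, lam=0.5, gamma=0.5, degree=2, coef0=1.0):
    """Convex combination of a polynomial kernel (global) and an RBF
    kernel (local): K = lam * K_poly + (1 - lam) * K_rbf."""
    poly = (X @ Y.T + coef0) ** degree
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * d2)
    return lam * poly + (1.0 - lam) * rbf

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = mixed_kernel(X, X)
eigs = np.linalg.eigvalsh(K)   # all (numerically) non-negative: valid kernel
```

A Gram matrix like `K` can be passed to any SVM solver that accepts precomputed kernels.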

13.
A Gaussian RBF kernel support vector machine is used to predict the spread series of the cointegration relation between the dominant and sub-dominant cotton commodity futures contracts; the optimal SVM parameters are determined, suitable position-opening and position-closing thresholds are chosen, and calendar-spread arbitrage within the same commodity is carried out. Compared with arbitrage based on a polynomial kernel SVM, the Gaussian RBF kernel SVM yields clearly higher arbitrage returns at every opening and closing threshold.

14.
A kernel-based statistical semantic topic model is introduced for comprehending three documents on internationally important Ramsar wetlands: the Lashi Lake wetland in Yunnan Province, the Yancheng wetland in Jiangsu Province, and the Zoige wetland in Sichuan Province of China. Latent Dirichlet allocation (LDA) features represent the semantic components of the wetland documents. Kernel principal component analysis (KPCA) maps the topic components into the kernel space to obtain low-dimensional principal components, and support vector machines (SVMs) model the semantic distribution of the distinct wetland documents in the kernel space. In this application, the LDA+KPCA+SVM algorithm reaches 77.0% training and 75.9% test accuracy, with mean average precision scores of 0.902 (training) and 0.840 (test). The proposed kernel-based model outperforms the traditional LDA+SVM and LDA+PCA+SVM models.
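The KPCA step on a precomputed Gram matrix can be sketched as follows; here an RBF kernel on synthetic data stands in for the kernel over the LDA topic features:

```python
import numpy as np

def kernel_pca(K, k):
    """Kernel PCA on a precomputed Gram matrix K: center it in feature
    space, eigendecompose, and return the top-k projected coordinates."""
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # double centering
    w, V = np.linalg.eigh(Kc)                        # ascending eigenvalues
    w, V = w[::-1][:k], V[:, ::-1][:, :k]            # keep the top-k
    return V * np.sqrt(np.maximum(w, 0.0))           # projected coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                         # stand-in for LDA features
gamma = 0.2
K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
Z = kernel_pca(K, 2)                                 # low-dimensional components
```

`Z` would then be the input to the SVM stage of the pipeline.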

15.
In recent years, kernel-based methods have proved very successful for many real-world learning problems. One of the main reasons for this success is their efficiency on large data sets, which results from the fact that kernel methods such as support vector machines (SVM) are based on a convex optimization problem. Solving a new learning problem can thus often be reduced to choosing an appropriate kernel function and kernel parameters. However, even the most powerful kernel methods can fail on quite simple data sets when the feature space induced by the kernel function is not sufficient; in such cases, an explicit feature space transformation or detection of latent variables proves more successful. Since explicit feature construction is often not feasible for large data sets, the ultimate goal for efficient kernel learning would be the adaptive creation of new, appropriate kernel functions. It cannot, however, be guaranteed that such a kernel function still leads to a convex optimization problem for support vector machines. Therefore, the optimization core of the learning method itself must be enhanced before it can be used with arbitrary, i.e., non-positive semidefinite, kernel functions. This article motivates the use of appropriate feature spaces and discusses the consequences that lead to non-convex optimization problems. On eight real-world benchmark data sets, the resulting non-convex SVM formulations are at least as accurate as their quadratic programming counterparts in terms of generalization performance, and they always outperform traditional approaches in terms of the original optimization objective. Additionally, the proposed algorithm is more generic than existing solutions, since it also works for non-positive semidefinite or indefinite kernel functions.

16.
Kernel logistic regression (KLR) is a powerful nonlinear classifier. Combining KLR with the truncated-regularized iteratively re-weighted least-squares (TR-IRLS) algorithm leads to a powerful classification method for small-to-medium-sized data sets, called truncated-regularized kernel logistic regression (TR-KLR). Compared with support vector machines (SVM) and TR-IRLS on twelve publicly available benchmark data sets, the proposed TR-KLR algorithm is as accurate as, and much faster than, SVM, and more accurate than TR-IRLS. TR-KLR also has the advantage of providing direct prediction probabilities.
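A bare-bones kernel logistic regression fitted by IRLS (Newton's method) can be sketched as follows; the truncation heuristics that distinguish TR-KLR are omitted, and the data are synthetic:

```python
import numpy as np

def klr_irls(K, y, lam=1e-2, iters=15):
    """Regularized kernel logistic regression via IRLS: Newton steps on the
    dual coefficients alpha, where the decision function is f = K @ alpha."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(iters):
        f = K @ alpha
        p = 1.0 / (1.0 + np.exp(-f))                 # predicted probabilities
        w = np.maximum(p * (1.0 - p), 1e-10)         # IRLS weights
        z = f + (y - p) / w                          # working response
        # Newton update: (W K + lam I) alpha = W z
        alpha = np.linalg.solve(w[:, None] * K + lam * np.eye(n), w * z)
    return alpha

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 0.5, (25, 2)), rng.normal(1.5, 0.5, (25, 2))])
y = np.array([0] * 25 + [1] * 25)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF Gram
prob = 1.0 / (1.0 + np.exp(-K @ klr_irls(K, y)))   # direct probabilities
acc = ((prob > 0.5) == y).mean()
```

Unlike an SVM score, `prob` is directly interpretable as a class probability, which is the advantage the abstract highlights.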

17.
This paper proposes a new classification algorithm based on support vector regression (SVR). It differs from the standard support vector machine (SVM): the standard SVM measures the margin with a fixed norm, and its optimization problem depends on a parameter. In the proposed method, the margin can be measured with an arbitrary norm, and the resulting optimization problem is a parameter-free linear program, avoiding parameter selection. Numerical experiments demonstrate the effectiveness of the algorithm.

18.
Identification and recognition of specific functionally important DNA sequence fragments, such as regulatory sequences, are among the most important problems in bioinformatics. One such type of fragment is the promoter, a short regulatory DNA sequence located upstream of a gene. Detection of regulatory DNA sequences is important for successful gene prediction and gene expression studies. In this paper, a support vector machine (SVM) is used for classification of DNA sequences and recognition of regulatory sequences. For optimal classification, various SVM learning and kernel parameters (hyperparameters) and their optimization methods are analyzed. In a case study, the SVM hyperparameters for linear, polynomial, and power series kernels are optimized using a modification of the Nelder–Mead (downhill simplex) algorithm. The method improves the precision of identification of regulatory DNA sequences. Results of promoter recognition on Drosophila sequence data sets are presented.
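The Nelder–Mead hyperparameter search can be sketched with SciPy; the objective below is a hypothetical stand-in for the promoter classifier's cross-validation error over log-scale SVM hyperparameters (e.g., log C and log gamma), which the paper would evaluate by actually training the SVM:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the cross-validation error surface over
# (log C, log gamma): a smooth bowl with its minimum at (1, -2) and a
# floor of 0.1 irreducible error.
def cv_error(theta):
    return float((theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2 + 0.1)

# Nelder-Mead needs no gradients, which suits a CV-error objective that is
# only available through (noisy, expensive) evaluations.
res = minimize(cv_error, x0=np.zeros(2), method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6})
best_log_C, best_log_gamma = res.x
```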


Copyright©北京勤云科技发展有限公司  京ICP备09084417号