20 similar references found.
1.
2.
Wear particle recognition based on support vector machines  Cited by 1 (self-citations: 0, other citations: 1)
Owing to the limitations of neural networks, support vector machines were proposed and developed at the end of the last century. They show broad promise for applications in pattern recognition and have evolved from the original binary classification to today's multi-class classification. Building on recent developments in support vector machines, this paper applies least squares support vector machines to wear particle recognition and obtains good results.
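To make the least squares SVM idea concrete: LS-SVM replaces the SVM's inequality constraints with equalities, so training reduces to a single linear system instead of a quadratic program. The sketch below is a generic LS-SVM classifier on synthetic two-class data standing in for wear particle features; the kernel width, regularization value, and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM linear system:
    # [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # b, alpha

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    return np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)

# toy two-class data standing in for wear particle features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, y, alpha, b, X[:5]))
```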
3.
Numerical methods for approximating continuous operators in R^n have long been a research focus in computational science. This paper introduces the support vector machine, an emerging learning machine, to solve the approximation problem for continuous operators in R^n. After giving a detailed mathematical formulation of support vector machines for operator approximation, we propose a blockwise approximation algorithm and use concrete examples to demonstrate the effectiveness and advantages of support vector machines for operator approximation.
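The abstract does not detail the blockwise scheme, but one plausible reading is to approximate a vector-valued operator one output block (here, one component) at a time with a scalar SVR. A minimal sketch under that assumption, with an arbitrary target operator F of our choosing:

```python
import numpy as np
from sklearn.svm import SVR

# Target operator F: R^2 -> R^2 (illustrative choice, not from the paper)
def F(X):
    return np.column_stack([np.sin(X[:, 0]) + X[:, 1],
                            X[:, 0] * X[:, 1]])

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (300, 2))
Y = F(X)

# "Blockwise" here: fit one scalar SVR per output block/component
models = [SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, Y[:, j])
          for j in range(Y.shape[1])]

Xtest = rng.uniform(-2, 2, (5, 2))
Yhat = np.column_stack([m.predict(Xtest) for m in models])
print(np.abs(Yhat - F(Xtest)).max())   # rough approximation error
```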
4.
A flight accident rate prediction model based on support vector machines  Cited by 1 (self-citations: 0, other citations: 1)
The flight accident rate is an important indicator of flight safety, and its prediction is a typical small-sample problem. To address the limited accuracy of current flight accident rate forecasts, a prediction modeling method based on support vector regression (SVR) is proposed. An SVR prediction model is then built on a practical example, and its predictions are compared against grey prediction and grey Markov chain prediction. Simulation results show that SVR achieves high modeling accuracy and generalization ability, confirming the soundness and advantages of using SVR to predict aviation flight accident rates.
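A minimal version of this setup, assuming a made-up yearly accident-rate series (the paper's data and SVR settings are not given here): fit SVR on the early years and predict the held-out recent ones.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

# Hypothetical yearly accident-rate series (fabricated, small-sample setting)
rates = np.array([4.1, 3.8, 3.9, 3.2, 3.0, 2.8, 2.9, 2.5, 2.3, 2.2])
years = np.arange(len(rates)).reshape(-1, 1).astype(float)

scaler = StandardScaler().fit(years)
model = SVR(kernel="rbf", C=100.0, epsilon=0.05, gamma="scale")
model.fit(scaler.transform(years[:-2]), rates[:-2])   # hold out last 2 years

pred = model.predict(scaler.transform(years[-2:]))
print("held-out actual:", rates[-2:], "predicted:", pred)
```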
5.
Nonparallel support vector machines extend support vector machines and have attracted wide attention. By allowing nonparallel supporting hyperplanes, they can capture differences in data distribution between classes and therefore apply to a broader range of problems. However, the relationship between nonparallel SVM models and the standard SVM model has received little study, and no nonparallel SVM model equivalent to the standard SVM has been available. Starting from the SVM, this work constructs a new nonparallel SVM model that not only degenerates to the standard SVM, preserving the sparsity and kernel extensibility of SVMs, but also captures the distribution differences between classes, making it suitable for a wider range of nonparallel data structures. Experiments provide preliminary validation of the proposed model's effectiveness.
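The new model's construction is not spelled out in the abstract, but the nonparallel-hyperplane idea it builds on can be illustrated with the classic GEPSVM: each class gets a plane that is close to its own points and far from the other class, found from the smallest generalized eigenvalue of a pair of scatter matrices. A sketch with a Tikhonov term added to both matrices for numerical stability (a simplification of the usual regularization):

```python
import numpy as np
from scipy.linalg import eigh

def gepsvm_plane(A, B, delta=1e-3):
    # Plane close to A, far from B: min ||[A e]z||^2 / ||[B e]z||^2,
    # i.e. the smallest generalized eigenvalue of (G, H)
    Ae = np.hstack([A, np.ones((len(A), 1))])
    Be = np.hstack([B, np.ones((len(B), 1))])
    G = Ae.T @ Ae + delta * np.eye(Ae.shape[1])   # Tikhonov term
    H = Be.T @ Be + delta * np.eye(Be.shape[1])
    vals, vecs = eigh(G, H)
    z = vecs[:, 0]                 # eigenvector of the smallest eigenvalue
    return z[:-1], z[-1]           # w, b

def predict(X, planes):
    # assign each point to the class whose plane is nearer
    d = [np.abs(X @ w + b) / np.linalg.norm(w) for w, b in planes]
    return np.where(d[0] <= d[1], -1, 1)

rng = np.random.default_rng(2)
A = rng.normal(0, 0.5, (30, 2))              # class -1
B = rng.normal(0, 0.5, (30, 2)) + [2, 2]     # class +1
planes = [gepsvm_plane(A, B), gepsvm_plane(B, A)]
print(predict(np.vstack([A[:3], B[:3]]), planes))
```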
6.
Mathematics in Practice and Theory (《数学的实践与认识》), 2015, (17)
Credit card operations are a high-return, high-risk business for banks, and controlling credit card customer churn is a problem that issuing banks urgently need to solve. Now that banks have accumulated large volumes of data and built data warehouses, applying data mining techniques to credit card churn analysis has become feasible. Using twin support vector machines on credit card data from a commercial bank, a credit card churn analysis model is built; experimental results demonstrate the effectiveness of the method.
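We do not have the bank's data or the exact twin SVM variant, so the sketch below uses the least-squares twin SVM, a closed-form cousin of TWSVM, on synthetic features standing in for customer attributes; class 0 plays the churners.

```python
import numpy as np

def lstsvm_planes(A, B, c1=1.0, c2=1.0):
    # Least-squares twin SVM: each plane hugs its own class and keeps
    # distance from the other; both planes come from linear solves.
    H = np.hstack([A, np.ones((len(A), 1))])   # [A e], churners
    G = np.hstack([B, np.ones((len(B), 1))])   # [B e], stayers
    u1 = -np.linalg.solve(H.T @ H / c1 + G.T @ G, G.T @ np.ones(len(B)))
    u2 = np.linalg.solve(G.T @ G / c2 + H.T @ H, H.T @ np.ones(len(A)))
    return (u1[:-1], u1[-1]), (u2[:-1], u2[-1])

def predict(X, plane_A, plane_B):
    # assign each point to the class whose plane is nearer
    dA = np.abs(X @ plane_A[0] + plane_A[1]) / np.linalg.norm(plane_A[0])
    dB = np.abs(X @ plane_B[0] + plane_B[1]) / np.linalg.norm(plane_B[0])
    return np.where(dA <= dB, 0, 1)            # 0 = churn, 1 = stay

rng = np.random.default_rng(3)
A = rng.normal([1, 5], 0.8, (40, 2))    # hypothetical churner features
B = rng.normal([4, 2], 0.8, (40, 2))    # hypothetical loyal-customer features
pA, pB = lstsvm_planes(A, B)
print((predict(A, pA, pB) == 0).mean(), (predict(B, pA, pB) == 1).mean())
```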
7.
8.
This work addresses the classification of neurons by their spatial geometric morphology and the prediction of neuron growth. Using neuron morphology data, it builds an SVM-based neuron morphology classification model, a neuron classification model based on principal component analysis and SVM, and a neuron growth prediction model based on a genetic algorithm and an RBF network. Under reasonable assumptions, each model is solved, yielding satisfactory results.
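The PCA-plus-SVM classifier in the second model is the easiest to sketch; with scikit-learn it is a three-stage pipeline. The data below are synthetic stand-ins for morphology measurements.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for neuron morphology features (the real data
# would be measurements such as soma size, branch counts, lengths).
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=8, random_state=0)

# PCA compresses correlated morphology measurements before the SVM
clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```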
9.
Support vector machine methods and fuzzy systems  Cited by 11 (self-citations: 1, other citations: 11)
This paper gives an overview of support vector machines (SVMs), a new machine learning method that has attracted much attention in recent years and combines a solid theoretical foundation with excellent practical results. It then analyzes the relationship between SVM methods and fuzzy systems and offers views on how the two approaches can promote each other's development.
10.
Support vector machine (SVM) algorithms are applied to forecast China's soybean yield. Using Chinese soybean data from 1991-2008 as the sample set, an SVM model is built between the influencing factors and soybean yield. The SVM is trained on the input and output data to approximate the functional relationship implicit in the historical data and to map new data series, thereby forecasting soybean yield in future years; the results are compared with several other forecasting methods. The results show that the SVM model forecasts soybean yield more accurately than the other methods.
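A schematic version of this workflow, with fabricated factor data in place of the 1991-2008 series: build a factor matrix, fit an SVR, and compare it against a linear baseline by cross-validation, mirroring the paper's method comparison in spirit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Fabricated stand-in for 18 years of influencing factors:
# columns might be sown area, fertilizer use, rainfall index, etc.
rng = np.random.default_rng(4)
X = rng.normal(size=(18, 3))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.normal(size=18)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
lin = make_pipeline(StandardScaler(), LinearRegression())
for name, model in [("SVR", svr), ("linear", lin)]:
    score = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    print(name, round(score, 3))
```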
11.
Knowledge based proximal support vector machines  Cited by 1 (self-citations: 0, other citations: 1)
We propose a proximal version of the knowledge based support vector machine formulation, termed knowledge based proximal support vector machines (KBPSVMs) in the sequel, for binary data classification. The KBPSVM classifier incorporates prior knowledge in the form of multiple polyhedral sets, and determines two parallel planes that are kept as distant from each other as possible. The proposed algorithm is simple and fast as no quadratic programming solver needs to be employed. Effectively, only the solution of a structured system of linear equations is needed.
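Setting the knowledge sets aside, the underlying proximal SVM mechanics can be sketched directly: the whole training step is one structured linear solve. The parameter nu and the data below are illustrative.

```python
import numpy as np

def psvm_train(A, d, nu=1.0):
    # Linear proximal SVM (without the knowledge sets):
    # min (nu/2)||e - D(Aw - e*gamma)||^2 + (1/2)(||w||^2 + gamma^2)
    # reduces to one structured linear system in u = [w; gamma].
    H = d[:, None] * np.hstack([A, -np.ones((len(A), 1))])   # D [A -e]
    n = H.shape[1]
    u = np.linalg.solve(np.eye(n) / nu + H.T @ H, H.T @ np.ones(len(A)))
    return u[:-1], u[-1]          # w, gamma

rng = np.random.default_rng(5)
A = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
d = np.concatenate([-np.ones(30), np.ones(30)])   # labels in {-1, +1}
w, gamma = psvm_train(A, d)
print(np.mean(np.sign(A @ w - gamma) == d))       # training accuracy
```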
12.
In this paper, we propose a robust L1-norm non-parallel proximal support vector machine (L1-NPSVM), which aims to give robust performance for binary classification in contrast to GEPSVM, especially for problems with outliers. The proposed L1-NPSVM has three main properties. First, unlike the traditional GEPSVM, which solves two generalized eigenvalue problems, our L1-NPSVM solves a pair of L1-norm optimization problems using a simple, justifiable iterative technique. Second, by introducing the L1-norm, our L1-NPSVM is considerably more robust to outliers than GEPSVM. Third, compared with GEPSVM, no parameters need to be regularized in our L1-NPSVM. The effectiveness of the proposed method is demonstrated by tests on a simple artificial example as well as on some UCI datasets, which show improvements over GEPSVM.
13.
Yu. V. Goncharov, I. B. Muchnik, L. V. Shvartser. Computational Mathematics and Mathematical Physics, 2008, 48(7):1243-1260
An algorithm for selecting features in the classification learning problem is considered. The algorithm is based on a modification of the standard criterion used in the support vector machine method. The new criterion adds to the standard criterion a penalty function that depends on the selected features. The solution of the problem is reduced to finding the minimax of a convex-concave function. As a result, the initial set of features is decomposed into three classes: unconditionally selected, weighted selected, and eliminated features.
Original Russian Text © Yu. V. Goncharov, I. B. Muchnik, L. V. Shvartser, 2008, published in Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 2008, Vol. 48, No. 7, pp. 1318–1336.
14.
Takeshi Asada, Yeboon Yun, Hirotaka Nakayama, Tetsuzo Tanino. Computational Management Science, 2004, 1(3-4):211-230
Support Vector Machines (SVMs) are now very popular as a powerful method in pattern classification problems. One of the main features of SVMs is to produce a separating hyperplane that maximizes the margin in a feature space induced by nonlinear mapping using a kernel function. As a result, SVMs can treat not only linear separation but also nonlinear separation. While the soft margin method of SVMs considers only the distance between the separating hyperplane and misclassified data, we propose in this paper a multi-objective programming formulation that takes surplus variables into account. A similar formulation was extensively researched in linear discriminant analysis, mostly in the 1980s, using Goal Programming (GP). This paper compares these conventional methods, such as SVMs and GP, with our proposed formulation through several examples.
15.
Method: In this paper, we introduce a bi-level optimization formulation for the model and feature selection problems of support vector machines (SVMs). A bi-level optimization model is proposed to select the best model, where the standard convex quadratic optimization problem of SVM training is cast as a subproblem.

Feasibility: The optimal objective value of the quadratic problem of SVMs is minimized over a feasible range of the kernel parameters at the master level of the bi-level model. Since the optimal objective value of the subproblem is a continuous function of the kernel parameters, though implicitly defined over a certain region, the solution of this bi-level problem always exists. The problem of feature selection can be handled in a similar manner.

Experiments and results: Two approaches for solving the bi-level problem of model and feature selection are considered as well. Experimental results show that the bi-level formulation provides a plausible tool for model selection.
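One crude stand-in for the master-level search (a grid sweep rather than the authors' bi-level algorithm): train an SVM at each kernel parameter, evaluate the dual objective value from the fitted model, and keep the minimizer.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=150, n_features=5, random_state=0)

def dual_objective(clf, X, gamma):
    # SVM dual value: sum(alpha) - 0.5 * sum_ij alpha_i alpha_j y_i y_j K_ij
    dc = clf.dual_coef_.ravel()            # y_i * alpha_i for support vectors
    K = rbf_kernel(X[clf.support_], X[clf.support_], gamma=gamma)
    return np.abs(dc).sum() - 0.5 * dc @ K @ dc

best = None
for gamma in [0.01, 0.1, 1.0, 10.0]:   # master-level grid over the kernel parameter
    clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)
    obj = dual_objective(clf, X, gamma)
    print(f"gamma={gamma:5.2f}  dual objective={obj:.3f}")
    if best is None or obj < best[1]:
        best = (gamma, obj)
print("selected gamma:", best[0])
```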
16.
Optimal kernel selection in twin support vector machines  Cited by 2 (self-citations: 0, other citations: 2)
In twin support vector machines (TWSVMs), we determine a pair of non-parallel planes by solving two related SVM-type problems, each of which is smaller than the one in a conventional SVM. However, similar to other classification methods, the performance of the TWSVM classifier depends on the choice of the kernel. In this paper we treat the kernel selection problem for TWSVM as an optimization problem over the convex set of finitely many basic kernels, and formulate it as an iterative alternating optimization problem. The efficacy of the proposed classification algorithm is demonstrated on some UCI machine learning benchmark datasets.
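A simplified substitute for the alternating scheme, using a plain SVM with a precomputed kernel and a holdout score rather than TWSVM: sweep convex combinations of two basic kernels and keep the best weight. Any convex combination of PSD kernels is again a valid kernel, which is what makes the search set convex.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
tr, te = train_test_split(np.arange(len(y)), test_size=0.5, random_state=0)

# two "basic" kernels; the model kernel is their convex combination
K1 = rbf_kernel(X, X, gamma=0.5)
K2 = polynomial_kernel(X, X, degree=3)

best = None
for mu in np.linspace(0, 1, 11):           # grid over the simplex {mu, 1-mu}
    K = mu * K1 + (1 - mu) * K2            # still a valid (PSD) kernel
    clf = SVC(kernel="precomputed", C=1.0).fit(K[np.ix_(tr, tr)], y[tr])
    acc = clf.score(K[np.ix_(te, tr)], y[te])
    if best is None or acc > best[1]:
        best = (mu, acc)
print("best weight on RBF kernel:", best[0], "holdout accuracy:", best[1])
```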
17.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space. The approximation property and the capacity of the reproducing kernel Hilbert space are measured with the covering number. The main result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error, where the sample error comprises the errors of learning the regression function and the regularizing function in the reproducing kernel Hilbert space. After the generalization error of learning the regression function is estimated (Theorem 2.2), an upper bound (Theorem 2.3) for the regularized learning algorithm associated with linear programming support vector regression is derived.
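In schematic notation of our own (the paper's exact statement may differ), a Theorem 2.1-style decomposition reads:

```latex
% Learning error split into sample and regularization parts
% (schematic notation; f_z = learned regressor, f_rho = target).
\mathcal{E}(f_z) - \mathcal{E}(f_\rho)
  \;\le\; \underbrace{\mathcal{S}(\mathbf{z},\lambda)}_{\text{sample error}}
  \;+\; \underbrace{\mathcal{D}(\lambda)}_{\text{regularization error}}
```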
18.
A convergent decomposition algorithm for support vector machines  Cited by 1 (self-citations: 0, other citations: 1)
S. Lucidi, L. Palagi, A. Risi, M. Sciandrone. Computational Optimization and Applications, 2007, 38(2):217-234
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for the choice of the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.
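The flavor of such decomposition methods is easiest to see in a toy SMO-style loop for the SVM dual: each iteration solves a two-variable subproblem in closed form, which automatically preserves the single equality constraint. The random working-set rule and the missing proximal term below are simplifications; the paper's contribution lies precisely in a smarter selection rule plus the proximal modification.

```python
import numpy as np

def smo_svm(X, y, C=1.0, tol=1e-4, max_iter=2000, seed=0):
    # Toy SMO-style decomposition for the SVM dual: each step updates
    # two multipliers, keeping sum_i alpha_i y_i = 0 and the box 0..C.
    rng = np.random.default_rng(seed)
    K = X @ X.T                       # linear kernel
    n = len(y)
    alpha = np.zeros(n)
    b = 0.0
    for _ in range(max_iter):
        i, j = rng.choice(n, size=2, replace=False)
        Ei = (alpha * y) @ K[:, i] + b - y[i]
        Ej = (alpha * y) @ K[:, j] + b - y[j]
        eta = K[i, i] + K[j, j] - 2 * K[i, j]
        if eta <= 1e-12:
            continue
        # box bounds keeping alpha_i y_i + alpha_j y_j constant
        if y[i] == y[j]:
            L, H = max(0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
        else:
            L, H = max(0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
        if H - L < 1e-12:
            continue
        aj = np.clip(alpha[j] + y[j] * (Ei - Ej) / eta, L, H)
        ai = alpha[i] + y[i] * y[j] * (alpha[j] - aj)
        alpha[i], alpha[j] = ai, aj
        # crude bias update from any free support vector
        sv = (alpha > tol) & (alpha < C - tol)
        if sv.any():
            k = np.argmax(sv)
            b = y[k] - (alpha * y) @ K[:, k]
    return alpha, b

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-1, 0.7, (25, 2)), rng.normal(1, 0.7, (25, 2))])
y = np.concatenate([-np.ones(25), np.ones(25)])
alpha, b = smo_svm(X, y)
pred = np.sign((alpha * y) @ (X @ X.T) + b)
print("training accuracy:", (pred == y).mean())
```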
19.
Nguyen Thai An. Optimization, 2017, 66(1):129-147
Several optimization schemes have been known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
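A toy version of the linearize-the-convex-part idea (a DCA-style step with a proximal term, not the paper's generalized algorithm), on an illustrative one-dimensional DC function of our choosing:

```python
from scipy.optimize import minimize_scalar

# Toy DC decomposition f = g - h with h convex; illustrative functions.
g = lambda x: 0.25 * x**4 + x**2         # smooth part
h = lambda x: 2 * x**2                    # convex part to be linearized
dh = lambda x: 4 * x                      # gradient of h
f = lambda x: g(x) - h(x)                 # f(x) = 0.25 x^4 - x^2

def prox_dca(x0, t=0.5, iters=50):
    # x_{k+1} = argmin g(z) - dh(x_k) * z + (1/2t)(z - x_k)^2:
    # linearize the concave part -h and add a proximal term.
    x = x0
    for _ in range(iters):
        surrogate = lambda z: g(z) - dh(x) * z + (z - x) ** 2 / (2 * t)
        x = minimize_scalar(surrogate).x
    return x

x_star = prox_dca(x0=0.3)
print(x_star, f(x_star))   # converges to a critical point of f
```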
20.
Satoru Ibaraki, Masao Fukushima, Toshihide Ibaraki. Computational Optimization and Applications, 1992, 1(2):207-226
A primal-dual version of the proximal point algorithm is developed for linearly constrained convex programming problems. The algorithm is an iterative method for finding a saddle point of the Lagrangian of the problem. At each iteration, we compute an approximate saddle point of the Lagrangian function augmented by quadratic proximal terms in both the primal and dual variables. Specifically, we first minimize the function with respect to the primal variables and then approximately maximize the resulting function of the dual variables. The merit of this approach lies in the fact that the latter function is differentiable and its maximization is subject to no constraints. We discuss convergence properties of the algorithm and report some numerical results for network flow problems with separable quadratic costs.
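A simplified sketch of this idea on a small equality-constrained QP, where the primal proximal subproblem has a closed form and the dual maximization collapses to a multiplier update (our simplification, close to the proximal method of multipliers):

```python
import numpy as np

# min 0.5 x'Qx - p'x  subject to  Ax = b   (illustrative problem data)
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
p = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

c = 1.0                                  # proximal parameter
x = np.zeros(2)
lam = np.zeros(1)
for k in range(100):
    # primal step: min_x L(x, lam) + (1/2c)||x - x_k||^2, closed form for a QP
    x = np.linalg.solve(Q + np.eye(2) / c, p - A.T @ lam + x / c)
    # dual step: maximizing the (differentiable) proximal dual function
    # reduces here to lam_{k+1} = lam_k + c (A x_{k+1} - b)
    lam = lam + c * (A @ x - b)
print("x:", x, "constraint residual:", A @ x - b)
```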