Found 20 similar documents (search time: 15 ms)
1.
Support Vector Machines (SVMs) are known to be a powerful nonparametric classification technique, even for high-dimensional data. Although predictive ability is important, obtaining an easy-to-interpret classifier is also crucial in many applications. Linear SVM provides a classifier based on a linear score. In the case of functional data, the coefficient function that defines such a linear score usually has many irregular oscillations, making it difficult to interpret.
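As a minimal illustration of the linear score mentioned above (the weights here are made up, not from the paper), a linear SVM classifies a sample by the sign of w·x + b:

```python
# Minimal sketch of a linear SVM decision rule: classify by the sign
# of the linear score w.x + b. The weights below are illustrative;
# a real SVM would learn them from training data.

def linear_score(w, x, b):
    """Linear score defining the classifier."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, x, b):
    """Predict +1 or -1 from the sign of the score."""
    return 1 if linear_score(w, x, b) >= 0 else -1

w, b = [0.5, -1.0], 0.25           # hypothetical learned parameters
print(classify(w, [2.0, 0.5], b))  # score = 1.0 - 0.5 + 0.25 = 0.75 -> 1
```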
2.
Nonparallel support vector machines extend the support vector machine and have attracted wide attention. By allowing nonparallel supporting hyperplanes, nonparallel support vector machines can capture differences in the data distributions of the classes and are therefore applicable to a broader range of problems. However, the relationship between nonparallel support vector machine models and the standard SVM model has received little study, and no nonparallel model equivalent to the standard SVM has been available. Starting from the SVM, we construct a new nonparallel support vector machine model. The model degenerates to the standard SVM as a special case, retaining the sparsity and kernel extensibility of the SVM, while also describing distributional differences between classes, which makes it suitable for a wider range of nonparallel-structured data. Finally, experiments provide preliminary evidence of the model's effectiveness.
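A common decision rule for nonparallel-hyperplane classifiers (a generic sketch, not this paper's specific model) assigns a point to the class whose hyperplane is nearer:

```python
import math

# Generic nonparallel-hyperplane decision rule: given one hyperplane
# (w_i, b_i) per class, assign x to the class whose plane is closest.
# The planes below are illustrative, not learned.

def plane_distance(w, b, x):
    """Perpendicular distance from x to the hyperplane w.x + b = 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return abs(score) / math.sqrt(sum(wi * wi for wi in w))

def nearest_plane_class(planes, x):
    """planes: list of (w, b) pairs, one per class; returns class index."""
    dists = [plane_distance(w, b, x) for w, b in planes]
    return dists.index(min(dists))

planes = [([1.0, 0.0], -1.0), ([0.0, 1.0], -1.0)]  # hypothetical planes
print(nearest_plane_class(planes, [1.1, 3.0]))     # closer to plane 0
```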
3.
Supervised classification is an important part of corporate data mining to support decision making in customer-centric planning tasks. The paper proposes a hierarchical reference model for support vector machine based classification within this discipline. The approach balances the conflicting goals of transparent yet accurate models and compares favourably to alternative classifiers in a large-scale empirical evaluation in real-world customer relationship management applications. Recent advances in support vector machine oriented research are incorporated to approach feature, instance and model selection in a unified framework.
4.
5.
This paper describes the relationship between support vector regression (SVR) and rough (or interval) patterns. SVR is the prediction component of the support vector techniques. Rough patterns are based on the notion of rough values, which consist of upper and lower bounds, and are used to effectively represent a range of variable values. Predictions of rough values in a variety of different forms within the context of interval algebra and fuzzy theory are attracting research interest. An extension of SVR, called rough support vector regression (RSVR), is proposed to improve the modeling of rough patterns. In particular, it is argued that the upper and lower bounds should be modeled separately. The proposal is shown to be a more flexible version of the lower possibilistic regression model using ε-insensitivity. Experimental results on the Dow Jones Industrial Average demonstrate the suggested RSVR modeling technique.
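The ε-insensitive loss underlying SVR (the standard definition, shown here independently of the paper's RSVR extension) ignores residuals smaller than ε:

```python
# Standard epsilon-insensitive loss used in support vector regression:
# residuals inside the epsilon tube incur zero loss, larger residuals
# are penalized linearly.

def eps_insensitive_loss(y_true, y_pred, eps):
    return max(0.0, abs(y_true - y_pred) - eps)

print(eps_insensitive_loss(1.0, 1.05, 0.1))  # inside the tube -> 0.0
print(eps_insensitive_loss(1.0, 1.50, 0.1))  # outside the tube -> ~0.4
```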
6.
Support vector machines can be posed as quadratic programming problems in a variety of ways. This paper investigates a formulation using the two-norm for the misclassification error that leads to a positive definite quadratic program with a single equality constraint under a duality construction. The quadratic term is a small rank update to a diagonal matrix with positive entries. The optimality conditions of the quadratic program are reformulated as a semismooth system of equations using the Fischer-Burmeister function and a damped Newton method is applied to solve the resulting problem. The algorithm is shown to converge from any starting point with a Q-quadratic rate of convergence. At each iteration, the Sherman-Morrison-Woodbury update formula is used to solve the key linear system. Results for a large problem with 60 million observations are presented demonstrating the scalability of the proposed method on a personal computer. Significant computational savings are realized as the inactive variables are identified and exploited during the solution process. Further results on a small problem separated by a nonlinear surface are given showing the gains in performance that can be made from restarting the algorithm as the data evolves.
Accepted: December 8, 2003. This work partially supported by NSF grant number CCR-9972372; AFOSR grant number F49620-01-1-0040; the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38; and Microsoft Corporation.
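The Fischer-Burmeister function used above, φ(a, b) = √(a² + b²) − a − b, turns complementarity conditions into equations: φ(a, b) = 0 exactly when a ≥ 0, b ≥ 0, and ab = 0. A small sketch:

```python
import math

# Fischer-Burmeister NCP function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
# phi(a, b) == 0 iff a >= 0, b >= 0 and a*b == 0, so the complementarity
# conditions of a QP can be rewritten as a semismooth equation system
# suitable for a damped Newton method.

def fischer_burmeister(a, b):
    return math.hypot(a, b) - a - b

print(fischer_burmeister(0.0, 3.0))  # complementary pair -> 0.0
print(fischer_burmeister(2.0, 0.0))  # complementary pair -> 0.0
print(fischer_burmeister(1.0, 1.0))  # both positive -> sqrt(2) - 2 != 0
```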
7.
We improve the twin support vector machine (TWSVM) to be a novel nonparallel hyperplanes classifier, termed ITSVM (improved twin support vector machine), for binary classification. By introducing different Lagrangian functions for the primal problems in the TWSVM, we get an improved dual formulation of TWSVM; the resulting ITSVM algorithm overcomes the common drawbacks of the TWSVMs and inherits the essence of the standard SVMs. Firstly, ITSVM does not need to compute the large inverse matrices before training, which is inevitable for the TWSVMs. Secondly, different from the TWSVMs, the kernel trick can be applied directly to ITSVM in the nonlinear case, so nonlinear ITSVM is theoretically superior to nonlinear TWSVM. Thirdly, ITSVM can be solved efficiently by the successive overrelaxation (SOR) technique or the sequential minimal optimization (SMO) method, which makes it more suitable for large scale problems. We also prove that the standard SVM is a special case of ITSVM. Experimental results show the efficiency of our method in both computation time and classification accuracy.
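Successive overrelaxation (SOR), one of the solvers mentioned above, sweeps through the variables Gauss-Seidel style with an overrelaxation factor 0 < ω < 2. A generic sketch on a small symmetric positive definite system (not the ITSVM dual itself):

```python
# Successive overrelaxation (SOR) for a linear system A x = b.
# Each sweep updates one coordinate at a time, overshooting the
# Gauss-Seidel update by a factor omega; SOR converges for symmetric
# positive definite A whenever 0 < omega < 2.

def sor_solve(A, b, omega=1.2, sweeps=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - sigma) / A[i][i]           # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # overrelaxed update
    return x

A = [[4.0, 1.0], [1.0, 3.0]]  # small SPD example system
b = [1.0, 2.0]                # exact solution: x = (1/11, 7/11)
print(sor_solve(A, b))
```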
8.
Data-extrapolating (extension) techniques have important applications in image processing on implicit surfaces and in level set methods. The existing data-extrapolating techniques are inefficient because they are designed without taking into account the special structure of the extrapolating equations. Besides, there exists little work on locating the narrow band after data extrapolating, a very important problem in narrow band level set methods. In this paper, we put forward the general Huygens' principle, and based on the principle we present two efficient data-extrapolating algorithms. The algorithms can easily locate the narrow band during data extrapolating. Furthermore, we propose a prediction-correction version of the data-extrapolating algorithms and the corresponding band locating method for a special case where the direct band locating method is hard to apply. Experiments demonstrate the efficiency of our algorithms and the convenience of the band locating method.
9.
Existing support vector machines (SVMs) all assume that every feature of the training samples contributes equally to constructing the optimal separating hyperplane. However, in a given real-world data set, some features may be more relevant to the classification than others. In this paper, the linear feature-weighted support vector machine (LFWSVM) is proposed to deal with this problem. The proposed model is constructed in two phases. First, a mutual information (MI) based approach is used to assign an appropriate weight to each feature of the given data set. Second, the model is trained on samples whose features are weighted by the obtained feature weight vector. The feature weights are embedded into the quadratic programming problem through a detailed theoretical derivation to obtain the dual solution of the original optimization problem. Although calculating the feature weights adds some computational cost, the proposed model generally exhibits better generalization performance than the traditional support vector machine (SVM) with a linear kernel. Experimental results on one synthetic data set and several benchmark data sets confirm the benefits of the proposed method. Moreover, experiments also show that the proposed MI based approach to determining feature weights is superior to the two other most commonly used methods.
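Mutual information between a discrete feature and the class label can be estimated from joint counts (a generic plug-in estimator sketch; the paper's exact weighting scheme may differ):

```python
import math
from collections import Counter

# Plug-in estimate of mutual information I(X; Y) between a discrete
# feature X and a label Y, from paired samples (in nats). In a
# feature-weighting scheme, a higher I(X; Y) would translate into a
# larger weight for that feature.

def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# A binary feature that determines the label has I(X; Y) = log 2.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))
# An independent feature has I(X; Y) = 0.
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))
```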
10.
Monatshefte für Mathematik - Swimmers caught in a rip current flowing away from the shore are advised to swim orthogonally to the current to escape it. We describe a mathematical principle in...
11.
The availability of abundant data poses a challenge to integrate static customer data and longitudinal behavioral data to improve performance in customer churn prediction. Usually, longitudinal behavioral data are transformed into static data before being included in a prediction model. In this study, a framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data. A novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated. A three phase training algorithm for the H-MK-SVM is developed, implemented and tested. The H-MK-SVM constructs a classification function by estimating the coefficients of both static and longitudinal behavioral variables in the training process without transformation of the longitudinal behavioral data. The training process of the H-MK-SVM is also a feature selection and time subsequence selection process because the sparse non-zero coefficients correspond to the variables selected. Computational experiments using three real-world databases were conducted. Computational results using multiple criteria measuring performance show that the H-MK-SVM directly using longitudinal behavioral data performs better than currently available classifiers.
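The multiple-kernel idea underlying the H-MK-SVM, shown here in its generic form as a nonnegative combination of base kernels (not the paper's hierarchical construction; the weights are illustrative):

```python
import math

# Generic multiple-kernel combination: a weighted sum
# K(x, z) = sum_m beta_m * K_m(x, z) with beta_m >= 0 is again a
# valid kernel. Here a linear kernel (e.g. static variables) is
# combined with an RBF kernel (e.g. behavioral variables).

def linear_kernel(x, z):
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=0.5):
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * d2)

def combined_kernel(x, z, betas=(0.7, 0.3)):
    return betas[0] * linear_kernel(x, z) + betas[1] * rbf_kernel(x, z)

x = [1.0, 2.0]
print(combined_kernel(x, x))  # 0.7 * 5 + 0.3 * 1 = 3.8
```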
12.
Issam Dagher, Journal of Global Optimization, 2008, 41(1): 15-30
A new quadratic kernel-free non-linear support vector machine (called QSVM) is introduced. The SVM optimization problem can be stated as follows: maximize the geometrical margin subject to all the training data having a functional margin greater than a constant. The functional margin is equal to W^T X + b, which is the equation of the hyperplane used for linear separation. The geometrical margin is equal to 1/||W||, and the constant in this case is equal to one. To separate the data non-linearly, a dual optimization form and the kernel trick must be used. In this paper, a quadratic decision function that is capable of separating the data non-linearly is used. The geometrical margin is proved to be equal to the inverse of the norm of the gradient of the decision function. The functional margin is the equation of the quadratic function. QSVM is proved to be solvable in a quadratic optimization setting. This setting does not require the use of a dual form or of the kernel trick. Comparisons between the QSVM and the SVM using the Gaussian and the polynomial kernels on databases from the UCI repository are shown.
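The margin relation quoted above (geometric margin = |f(x)| / ‖∇f(x)‖ for a quadratic decision function f) can be checked on a small example; the matrix and point below are made up for illustration:

```python
import math

# Geometric margin of a point under a quadratic decision function
# f(x) = x^T A x + b^T x + c: per the abstract, it equals the
# functional margin |f(x)| divided by the norm of the gradient of f.
# A is taken symmetric, so grad f(x) = 2 A x + b.

def quad_decision(A, b, c, x):
    n = len(x)
    quad = sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
    return quad + sum(bi * xi for bi, xi in zip(b, x)) + c

def geometric_margin(A, b, c, x):
    n = len(x)
    grad = [2 * sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
    return abs(quad_decision(A, b, c, x)) / math.sqrt(sum(g * g for g in grad))

A = [[1.0, 0.0], [0.0, 1.0]]  # unit circle boundary: f(x) = |x|^2 - 1
b, c = [0.0, 0.0], -1.0
print(geometric_margin(A, b, c, [2.0, 0.0]))  # f = 3, |grad| = 4 -> 0.75
```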
13.
Feature data obtained for the same object through different channels or at different levels are called multi-view data. Multi-view learning is a machine learning approach that builds models from the multi-view data of an object. Many studies have shown that learning jointly from multiple views can significantly improve model performance, and many related models and algorithms have been proposed. Multi-view learning generally follows the consensus principle and the complementarity principle. Based on the consensus principle, Farquhar et al. successfully integrated the support vector machine (SVM) and kernel canonical correlation analysis (KCCA) into a single optimization problem, proposing the SVM-2K model. However, SVM-2K does not make full use of the complementary information between views. Therefore, building on SVM-2K, this paper proposes a margin transfer-based multi-view support vector machine (M^2SVM), which satisfies both the consensus and complementarity principles of multi-view learning. We further analyze the model theoretically from the consensus perspective and, by comparison with SVM-2K, show that M^2SVM is more flexible than SVM-2K. Finally, the effectiveness of M^2SVM is verified on a large number of multi-view data sets.
14.
Veronica Piccialli, Marco Sciandrone, 4OR: A Quarterly Journal of Operations Research, 2018, 16(2): 111-149
The Support Vector Machine (SVM) is one of the most important classes of machine learning models and algorithms, and has been successfully applied in various fields. Nonlinear optimization plays a crucial role in SVM methodology, both in defining the machine learning models and in designing convergent and efficient algorithms for large-scale training problems. In this paper we present the convex programming problems underlying SVM, focusing on supervised binary classification. We analyze the most important and widely used optimization methods for SVM training problems, and we discuss how the properties of these problems can be exploited in designing useful algorithms.
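One of the simplest optimization methods covered by such surveys is stochastic subgradient descent on the regularized hinge-loss primal, in the style of Pegasos. A sketch on a toy separable data set (the data and parameters are made up; no bias term, for brevity):

```python
# Stochastic subgradient descent on the regularized SVM primal
#   min_w  lambda/2 * |w|^2 + mean_i max(0, 1 - y_i * w.x_i),
# Pegasos-style: shrink w each step, then add a hinge subgradient
# correction whenever the margin constraint is violated.

def train_svm(data, lam=0.01, epochs=200):
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)                    # standard step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]   # regularizer shrink
            if margin < 1:                           # hinge subgradient
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

data = [([2.0, 1.0], 1), ([1.5, 2.0], 1),
        ([-2.0, -1.0], -1), ([-1.0, -2.0], -1)]
w = train_svm(data)
print(all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, y in data))
```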
15.
In this paper we propose a new nonparametric regression method called composite support vector quantile regression (CSVQR) that combines the formulations of support vector regression and composite quantile regression. First the CSVQR using quadratic programming (QP) is proposed, and then the CSVQR utilizing the iteratively reweighted least squares (IRWLS) procedure is proposed to overcome the weakness of the QP based method in terms of computation time. The IRWLS based method enables us to derive a generalized cross validation (GCV) function that is easier and faster to evaluate than the conventional cross validation function. The GCV function facilitates choosing the hyperparameters that affect the performance of the CSVQR while saving computation time. Numerical experiment results are presented to illustrate the performance of the proposed method.
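Composite quantile regression builds on the pinball (check) loss; for quantile level τ it penalizes under- and over-prediction asymmetrically (the standard definition, shown here independently of the CSVQR formulation):

```python
# Pinball (check) loss for quantile level tau in (0, 1): residuals
# above the predicted quantile are weighted by tau, residuals below
# it by (1 - tau). Composite quantile regression averages this loss
# over several quantile levels.

def pinball_loss(y_true, y_pred, tau):
    r = y_true - y_pred
    return tau * r if r >= 0 else (tau - 1) * r

print(pinball_loss(2.0, 1.0, 0.9))  # under-prediction: 0.9 * 1 = 0.9
print(pinball_loss(1.0, 2.0, 0.9))  # over-prediction: 0.1 * 1 = 0.1
```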
16.
Knowledge based proximal support vector machines
We propose a proximal version of the knowledge based support vector machine formulation, termed as knowledge based proximal support vector machines (KBPSVMs) in the sequel, for binary data classification. The KBPSVM classifier incorporates prior knowledge in the form of multiple polyhedral sets, and determines two parallel planes that are kept as distant from each other as possible. The proposed algorithm is simple and fast as no quadratic programming solver needs to be employed. Effectively, only the solution of a structured system of linear equations is needed.
17.
Accurate electric load forecasting has become the most important management goal; however, electric loads often present nonlinear data patterns. Therefore, a rigorous forecasting approach with strong general nonlinear mapping capabilities is essential. Support vector regression (SVR) applies the structural risk minimization principle to minimize an upper bound on the generalization error, rather than minimizing the training error as ANNs do. The purpose of this paper is to present an SVR model with an immune algorithm (IA) for forecasting electric loads; the IA is applied to determine the parameters of the SVR model. The empirical results indicate that the SVR model with IA (SVRIA) achieves better forecasting performance than the other methods, namely the SVMG, regression, and ANN models.
18.
The goal of classification (or pattern recognition) is to construct a classifier with small misclassification error. The notions of consistency and universal consistency are important in the construction of classification rules. A consistent rule guarantees that taking more samples essentially suffices to roughly reconstruct the unknown distribution. The support vector machine (SVM) algorithm is one of the most important rules for two-category classification. How to effectively extend the SVM to multicategory classification is still an ongoing research issue. Different versions of multicategory support vector machines (MSVMs) have been proposed and used in practice. We study the one designed by Lee, Lin and Wahba with the hinge loss functional. The consistency of MSVMs is established under a mild condition. As a corollary, universal consistency holds if the reproducing kernel Hilbert space is dense in the C norm. In addition, an example is given to demonstrate the main results.
Dedicated to Charlie Micchelli on the occasion of his 60th birthday
Supported in part by NSF of China under Grants 10571010 and 10171007.
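The Lee-Lin-Wahba multicategory hinge loss referenced above is commonly written as Σ_{j≠y} (f_j(x) + 1/(k−1))_+ for a k-class score vector f subject to the sum-to-zero constraint Σ_j f_j = 0. A sketch of this standard form from the literature (reconstructed here, not quoted from the paper):

```python
# Lee-Lin-Wahba style multicategory hinge loss for k classes:
#   L(y, f) = sum over j != y of max(0, f_j + 1/(k-1)),
# where f is the k-vector of class scores with sum(f) = 0 and y is
# the index of the true class.

def msvm_hinge_loss(y, f):
    k = len(f)
    return sum(max(0.0, fj + 1.0 / (k - 1))
               for j, fj in enumerate(f) if j != y)

# Confident prediction: wrong-class scores sit at the target -1/(k-1).
print(msvm_hinge_loss(0, [1.0, -0.5, -0.5]))  # -> 0.0
# Uninformative prediction f = 0 incurs (k-1) * 1/(k-1) = 1.0.
print(msvm_hinge_loss(0, [0.0, 0.0, 0.0]))    # -> 1.0
```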
19.
Bruno Apolloni, Lorenzo Valerio, Nonlinear Analysis: Theory, Methods & Applications, 2010, 73(9): 2855-2867
We propose a variant of two SVM regression algorithms expressly tailored to exploit additional information summarizing the relevance of each data item, as a measure of its relative importance w.r.t. the remaining examples. These variants, which reduce to the original formulations when all data items have the same relevance, are preliminarily tested on synthetic and real-world data sets. The obtained results outperform standard SVM approaches to regression when evaluated in light of the above mentioned additional information about data quality.
20.
Support vector machines (SVMs) have attracted much attention in theoretical and in applied statistics. The main topics of recent interest are consistency, learning rates and robustness. We address the open problem of whether SVMs are qualitatively robust. Our results show that SVMs are qualitatively robust for any fixed regularization parameter λ. However, under extremely mild conditions on the SVM, it turns out that SVMs are no longer qualitatively robust for any null sequence λ_n, which are the classical sequences needed to obtain universal consistency. This lack of qualitative robustness is of a rather theoretical nature because we show that, in any case, SVMs fulfill a finite sample qualitative robustness property. For a fixed regularization parameter, SVMs can be represented by a functional on the set of all probability measures. Qualitative robustness is proven by showing that this functional is continuous with respect to the topology generated by weak convergence of probability measures. Combined with the existence and uniqueness of SVMs, our results show that SVMs are the solutions of a well-posed mathematical problem in Hadamard's sense.