20 similar documents found; search took 15 ms
1.
2.
R Stoean M Preuss C Stoean E El-Darzi D Dumitrescu 《The Journal of the Operational Research Society》2009,60(8):1116-1122
The paper presents a novel evolutionary technique constructed as an alternative to the standard support vector machine architecture. The approach adopts the learning strategy of the latter but aims to simplify and generalize its training by offering a transparent substitute for the initial black box. Contrary to the canonical technique, the evolutionary approach can at all times explicitly acquire the coefficients of the decision function, without any further constraints. Moreover, in order to converge, the evolutionary method does not require positive (semi-)definiteness of the kernels used in nonlinear learning. Several potential structures, enhancements and additions are proposed, tested and confirmed using available benchmark test problems. Computational results show the validity of the new approach in terms of runtime, prediction accuracy and flexibility.
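To illustrate the idea (this is a hedged toy sketch, not the paper's actual algorithm), a minimal (mu + lambda) evolution strategy can search directly over the coefficients (w, b) of a linear decision function by minimizing a hinge-loss objective; the data, population size and mutation scale below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: two Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

def objective(p):
    # Hinge loss of the linear decision function f(x) = w.x + b,
    # plus a small norm penalty (soft-margin flavour).
    w, b = p[:2], p[2]
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins).mean() + 0.01 * (w @ w)

# (mu + lambda) evolution strategy over the three coefficients (w1, w2, b).
pop = rng.normal(size=(20, 3))
for _ in range(200):
    children = pop + rng.normal(scale=0.3, size=pop.shape)
    candidates = np.vstack([pop, children])
    fitness = np.array([objective(p) for p in candidates])
    pop = candidates[np.argsort(fitness)[:20]]  # keep the 20 fittest

best = pop[0]
acc = float(np.mean(np.sign(X @ best[:2] + best[2]) == y))
```

Because the search acts on the coefficients themselves, the decision function is available explicitly at every generation, which is the transparency the abstract emphasizes.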
3.
This paper discusses coefficient regularization with the least squares loss in sample-dependent hypothesis spaces under unbounded sampling. The learning scheme differs essentially from the earlier reproducing kernel Hilbert space setting: the kernel is only required to be continuous and bounded, not symmetric or positive definite; the regularizer is the l2-norm of the expansion coefficients of the function with respect to the samples; and the sample outputs are unbounded. These differences add extra difficulty to the error analysis. The goal of this paper is to derive concentration estimates for the error via l2-empirical covering numbers when the sample outputs are not uniformly bounded. By introducing a suitable Hilbert space and using the l2-empirical covering number technique, we obtain satisfactory learning rates that depend on the capacity of the hypothesis space and the regularity of the regression function.
4.
Support vector machine learning algorithm and transduction
A. Gammermann 《Computational Statistics》2000,15(1):31-39
5.
In this paper we construct the linear support vector machine (SVM) based on the nonlinear rescaling (NR) methodology (see [Polyak in Math Program 54:177–222, 1992; Polyak in Math Program Ser A 92:197–235, 2002; Polyak and Teboulle in Math Program 76:265–284, 1997] and references therein). The formulation of the linear SVM based on the NR method leads to an algorithm which reduces the number of support vectors without compromising classification performance compared to the linear soft-margin SVM formulation. The NR algorithm computes both the primal and the dual approximation at each step. The dual variables associated with the given data set provide important information about each data point and play the key role in selecting the set of support vectors. Experimental results on ten benchmark classification problems show that the NR formulation is feasible. The quality of discrimination is, in most instances, comparable to the linear soft-margin SVM, while the number of support vectors was in several instances substantially reduced.
6.
《Optimization》2012,61(7):1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problems as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We obtain the solution of the converted problem by a fixed point iterative algorithm and establish the convergence of the algorithm. We finally present some preliminary results of our computational experiments with the method.
7.
Learning with coefficient-based regularization has attracted a considerable amount of attention in recent years, in both theoretical analysis and applications. In this paper, we study a coefficient-based learning scheme (CBLS) for the regression problem with an lq-regularizer (1 < q ≤ 2). Our analysis is conducted under more general conditions; in particular, the kernel function is not necessarily positive definite. This paper applies a concentration inequality with l2-empirical covering numbers to present an elaborate capacity-dependence analysis for CBLS, which yields sharper estimates than existing bounds. Moreover, we estimate the regularization error to support our assumptions in the error analysis, and provide an illustrative example to further verify the theoretical results.
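For the special case q = 2 the coefficient-regularized least squares problem has a closed form, which the following sketch illustrates on synthetic data; the kernel, bandwidth and regularization parameter are arbitrary illustrative choices, and the scheme would accept any continuous bounded kernel, positive definite or not:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 40)
y = np.sin(x) + rng.normal(scale=0.1, size=40)

# Kernel matrix on the samples (a Gaussian here, but the coefficient-based
# scheme does not require symmetry or positive definiteness).
K = np.exp(-(x[:, None] - x[None, :]) ** 2)

lam = 0.1
# q = 2: the coefficient-regularized problem
#   min_a ||K a - y||^2 + lam * ||a||_2^2
# has the closed-form solution a = (K^T K + lam I)^{-1} K^T y.
a = np.linalg.solve(K.T @ K + lam * np.eye(len(x)), K.T @ y)

mse = float(np.mean((K @ a - y) ** 2))
```

The regularizer acts on the expansion coefficients `a` rather than on an RKHS norm, which is exactly the structural difference the abstract highlights.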
8.
Machine learning is a very interesting and important branch of artificial intelligence. Among many learning models, the support vector machine is a popular model with high classification ability which can be trained by mathematical programming methods. Since the model was originally formulated for binary classification, various kinds of extensions have been investigated for multi-class classification. In this paper, we review some existing models, and introduce new models which we recently proposed. The models are derived from the viewpoint of multi-objective maximization of geometric margins for a discriminant function, and each model can be trained by solving a second-order cone programming problem. We show that discriminant functions with high generalization ability can be obtained by these models through some numerical experiments.
9.
10.
Theodore B. Trafalis Olutayo O. Oladunni Michael B. Richman 《Computational Management Science》2011,8(3):281-297
A knowledge-based linear Tikhonov regularization classification model for tornado discrimination is presented. Twenty-three attributes, based on the National Severe Storms Laboratory's Mesoscale Detection Algorithm, are used as prior knowledge. Threshold values for these attributes are employed to discriminate the data into two classes (tornado, non-tornado). The Weather Surveillance Radar-1988 Doppler is used as a source of data streaming every 6 min. The combination of data and prior knowledge is used in the development of a least squares problem that can be solved using matrix or iterative methods. Advantages of this formulation include explicit expressions for the classification weights of the classifier and its ability to incorporate and handle prior knowledge directly in the classifier. Comparison of the present approach to that of Fung et al. [in Proceedings neural information processing systems (NIPS 2002), Vancouver, BC, December 10–12, 2002], over a suite of forecast evaluation indices, demonstrates that the Tikhonov regularization model is superior for discriminating tornadic from non-tornadic storms.
11.
Bing Zheng Li 《数学学报(英文版)》2008,24(3):511-528
The purpose of this paper is to provide an error analysis for multicategory support vector machine (MSVM) classification problems. We establish a uniform convergence approach for MSVMs and estimate the misclassification error. The main difficulty we overcome here is to bound the offset vector. As a result, we confirm that the MSVM classification algorithm with polynomial kernels is always efficient when the degree of the kernel polynomial is large enough. Finally the rate of convergence and examples are given to demonstrate the main results.
12.
Nonparallel support vector machines extend the support vector machine and have attracted wide attention. Their construction allows nonparallel supporting hyperplanes, which can describe differences in the data distributions of the classes and thus apply to a broader range of problems. However, the relationship between nonparallel SVM models and the standard SVM has received little study, and no nonparallel model equivalent to the standard SVM has been available. Starting from the SVM, we construct a new nonparallel support vector machine model. This model not only degenerates to the standard SVM, preserving the sparsity and kernel extensibility of the SVM, but also describes distributional differences between the classes, making it applicable to a wider range of data with nonparallel structure. Finally, experiments provide preliminary verification of the effectiveness of the proposed model.
13.
14.
M. S. Matvejchuk 《Russian Mathematics (Iz VUZ)》2008,52(9):41-50
We describe a correlation function generated by a J-orthogonal indefinite measure with values in a Krein space.
15.
We construct a canonical correspondence from a wide class of reproducing kernels on infinite-dimensional Hermitian vector bundles to linear connections on these bundles. The linear connection in question is obtained through a pull-back operation involving the tautological universal bundle and the classifying morphism of the input kernel. The aforementioned correspondence turns out to be a canonical functor between categories of kernels and linear connections. A number of examples of linear connections including the ones associated to classical kernels, homogeneous reproducing kernels and kernels occurring in the dilation theory for completely positive maps are given, together with their covariant derivatives.
16.
Uniform boundedness of output variables is a standard assumption in most theoretical analyses of regression algorithms. This standard assumption has recently been weakened to a moment hypothesis in the least squares regression (LSR) setting. Although there is a large literature on error analysis for LSR under the moment hypothesis, very little is known about the statistical properties of support vector machine regression with unbounded sampling. In this paper, we fill the gap in the literature. Without any restriction on the boundedness of the output sampling, we establish an ad hoc convergence analysis for support vector machine regression under very mild conditions.
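Support vector machine regression is built on Vapnik's epsilon-insensitive loss, which ignores residuals inside a tube of radius epsilon; a minimal sketch (the value of epsilon and the residuals are arbitrary illustrative choices):

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    # Vapnik's epsilon-insensitive loss: zero inside the tube of radius eps,
    # growing linearly outside it.
    return np.maximum(0.0, np.abs(residual) - eps)

r = np.array([-0.3, -0.05, 0.0, 0.08, 0.25])
loss = eps_insensitive(r)   # -> [0.2, 0.0, 0.0, 0.0, 0.15]
```

The linear (rather than quadratic) growth outside the tube is one reason moment conditions, rather than uniform boundedness, are natural for the outputs here.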
17.
Quantile regression has received a great deal of attention as an important tool for modeling statistical quantities of interest other than the conditional mean. Varying coefficient models are widely used to explore dynamic patterns and are among the popular models available for avoiding the curse of dimensionality. We propose a support vector quantile regression model with varying coefficients and two estimation methods for it. One uses quadratic programming, and the other uses an iteratively reweighted least squares procedure. The proposed method can be applied easily and effectively to estimating nonlinear regression quantiles depending on a high-dimensional vector of smoothing variables. We also present a model selection method that employs generalized cross validation and generalized approximate cross validation techniques for choosing the hyperparameters, which affect the performance of the proposed model. Numerical studies are conducted to illustrate the performance of the proposed model.
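The defining property of quantile regression, namely that minimizing the pinball (check) loss recovers the tau-quantile, can be verified numerically; the brute-force grid search below is a toy device for illustration, not one of the paper's two estimation methods:

```python
import numpy as np

def pinball(residual, tau):
    # Check (pinball) loss: asymmetric absolute error whose minimizer over a
    # constant predictor is the tau-quantile of the data.
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

rng = np.random.default_rng(3)
y = rng.normal(size=10000)   # standard normal sample

# For each tau, find the constant c minimizing the mean pinball loss.
taus = [0.1, 0.5, 0.9]
grid = np.linspace(-3, 3, 601)
fitted = [grid[np.argmin([pinball(y - c, t).mean() for c in grid])]
          for t in taus]
```

For a standard normal sample the minimizers land near the theoretical quantiles (-1.28, 0, 1.28), which is why the pinball loss underlies the support vector quantile regression objective.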
18.
This paper investigates the approximation of multivariate functions from data via linear combinations of translates of a positive definite kernel from a reproducing kernel Hilbert space. If standard interpolation conditions are relaxed by Chebyshev-type constraints, one can minimize the norm of the approximant in the Hilbert space under these constraints. By standard arguments of optimization theory, the solutions will take a simple form, based on the data related to the active constraints, called support vectors in the context of machine learning. The corresponding quadratic programming problems are investigated to some extent. Using monotonicity results concerning the Hilbert space norm, iterative techniques based on small quadratic subproblems on active sets are shown to be finite, even if they drop part of their previous information and even if they are used for infinite data, e.g., in the context of online learning. Numerical experiments confirm the theoretical results.
Dedicated to C.A. Micchelli at the occasion of his 60th birthday
Mathematics subject classifications (2000) 65D05, 65D10, 41A15, 41A17, 41A27, 41A30, 41A40, 41A63.
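When the Chebyshev-type constraints are tightened to exact interpolation, the minimum-norm approximant reduces to solving the kernel system K c = y; a small sketch with a Gaussian kernel on synthetic one-dimensional data (all sites, bandwidth and target function are illustrative assumptions):

```python
import numpy as np

# Data sites and values of the target function to interpolate.
x = np.linspace(-1, 1, 15)
y = np.cos(2 * x)

# Positive definite Gaussian kernel matrix on the sites.
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

# Exact interpolation: solve K c = y. The interpolant
#   s(t) = sum_j c_j k(t, x_j)
# is the minimum-norm element of the RKHS matching the data.
c = np.linalg.solve(K, y)

t = np.linspace(-1, 1, 200)
s = np.exp(-(t[:, None] - x[None, :]) ** 2 / 0.1) @ c
err = float(np.max(np.abs(s - np.cos(2 * t))))
```

Relaxing the equalities to |s(x_i) - y_i| <= eta would leave only the active constraints contributing translates, which is exactly the support vector structure the abstract describes.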
19.
20.