Similar Documents
20 similar documents found (search time: 31 ms)
1.
We improve the twin support vector machine (TWSVM) into a novel nonparallel-hyperplane classifier, termed ITSVM (improved twin support vector machine), for binary classification. By introducing different Lagrangian functions for the primal problems in the TWSVM, we obtain an improved dual formulation of TWSVM; the resulting ITSVM algorithm overcomes the common drawbacks of TWSVMs and inherits the essence of the standard SVM. First, ITSVM does not need to compute large inverse matrices before training, which is inevitable for TWSVMs. Second, unlike TWSVMs, the kernel trick can be applied directly to ITSVM in the nonlinear case, so nonlinear ITSVM is theoretically superior to nonlinear TWSVM. Third, ITSVM can be solved efficiently by the successive overrelaxation (SOR) technique or the sequential minimal optimization (SMO) method, which makes it more suitable for large-scale problems. We also prove that the standard SVM is a special case of ITSVM. Experimental results show the efficiency of our method in both computation time and classification accuracy.
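As a minimal illustration of the nonparallel-hyperplane idea shared by TWSVM and ITSVM, the sketch below classifies a point by its distance to each of the two fitted hyperplanes; the parameters (w1, b1) and (w2, b2) are assumed to come from a trained model, and the training itself (e.g., SOR or SMO on the ITSVM dual) is not reproduced here.

```python
import numpy as np

# Decision rule of a nonparallel-hyperplane classifier: assign x to the
# class whose hyperplane w_k . x + b_k = 0 lies nearer. (w1, b1) and
# (w2, b2) are hypothetical trained parameters.
def nonparallel_predict(X, w1, b1, w2, b2):
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)  # distance to class +1 plane
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)  # distance to class -1 plane
    return np.where(d1 <= d2, 1, -1)
```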

2.
Kernel Fisher discriminant analysis (KFDA) is a popular classification technique which requires the user to predefine an appropriate kernel. Since the performance of KFDA depends on the choice of the kernel, the problem of kernel selection becomes very important. In this paper we treat the kernel selection problem as an optimization problem over the convex set of finitely many basic kernels, and formulate it as a second-order cone programming (SOCP) problem. This formulation is promising because the resulting SOCP can be solved efficiently by interior point methods. The efficacy of the optimal kernel, selected from a given convex set of basic kernels, is demonstrated on UCI machine learning benchmark datasets.
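The search space of this kernel selection problem can be pictured as convex combinations of basic Gram matrices; the toy sketch below forms K = Σᵢ μᵢ Kᵢ for given simplex weights. The weights here are hypothetical placeholders; in the paper they are the solution of the SOCP, which is not solved here.

```python
import numpy as np

# Convex combination of basic kernels: K = sum_i mu_i K_i, mu in the simplex.
def combined_gram(grams, mu):
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0), "mu must lie in the simplex"
    return sum(m * G for m, G in zip(mu, grams))

# Toy example with a linear and an RBF basic kernel; the weights are made up.
X = np.random.randn(10, 3)
K_lin = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / 2.0)
K = combined_gram([K_lin, K_rbf], [0.3, 0.7])
```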

3.
In this paper, we propose a kernel-free semi-supervised quadratic surface support vector machine model for binary classification. The model is formulated as a mixed-integer programming problem, which is equivalent to a non-convex optimization problem with absolute-value constraints. Using relaxation techniques, we derive a semi-definite programming problem for semi-supervised learning. By solving this problem, the proposed model is tested on some artificial and public benchmark data sets. Preliminary computational results indicate that the proposed method outperforms some well-known existing methods for solving semi-supervised support vector machines with a Gaussian kernel in terms of classification accuracy.
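For intuition, a kernel-free quadratic-surface classifier scores a point with f(x) = ½ xᵀWx + bᵀx + c and predicts sign(f(x)); the sketch below assumes W, b, c were already produced by the (SDP-relaxed) training problem, which is not shown.

```python
import numpy as np

# Quadratic-surface decision function f(x) = 0.5 x^T W x + b^T x + c.
# W, b, c are hypothetical trained parameters.
def quad_surface_predict(X, W, b, c):
    vals = 0.5 * np.einsum('ni,ij,nj->n', X, W, X) + X @ b + c
    return np.sign(vals)
```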

4.
In recent years, kernel-based methods have proved very successful for many real-world learning problems. One of the main reasons for this success is their efficiency on large data sets, which results from the fact that kernel methods like support vector machines (SVMs) are based on a convex optimization problem. Solving a new learning problem can now often be reduced to choosing an appropriate kernel function and kernel parameters. However, even the most powerful kernel methods can still fail on quite simple data sets when the feature space induced by the kernel function is not sufficient. In these cases, an explicit feature space transformation or detection of latent variables proves more successful. Since such explicit feature construction is often not feasible for large data sets, the ultimate goal of efficient kernel learning would be the adaptive creation of new and appropriate kernel functions. It cannot, however, be guaranteed that such a kernel function still leads to a convex optimization problem for support vector machines. Therefore, the optimization core of the learning method itself must be enhanced before it can be used with arbitrary, i.e., non-positive semidefinite, kernel functions. This article motivates the use of appropriate feature spaces and discusses the possible consequences, which lead to non-convex optimization problems. We show that these non-convex SVMs are at least as accurate as their quadratic programming counterparts on eight real-world benchmark data sets in terms of generalization performance, and they always outperform traditional approaches in terms of the original optimization objective. Additionally, the proposed algorithm is more generic than existing traditional solutions since it also works for non-positive semidefinite or indefinite kernel functions.
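Whether a kernel is positive semidefinite can be probed empirically by eigendecomposing its Gram matrix; the snippet below does this for the sigmoid (tanh) kernel, a classic example that is indefinite for many parameter settings. The parameter values are illustrative.

```python
import numpy as np

# Empirical PSD check for a kernel: a negative smallest eigenvalue of the
# Gram matrix certifies indefiniteness -- the situation that motivates the
# non-convex SVM formulation discussed above.
X = np.random.randn(30, 4)
K = np.tanh(0.5 * (X @ X.T) - 1.0)           # sigmoid kernel, often indefinite
eigvals = np.linalg.eigvalsh((K + K.T) / 2)  # symmetrize, then eigenvalues
print("smallest eigenvalue:", eigvals.min()) # negative => not PSD
```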

5.
In this paper, we investigate the implications for portfolio theory of using conditional expectation estimators. First, we focus on approximating the conditional expectation within large-scale portfolio selection problems. In this context, we propose a new consistent multivariate kernel estimator of the conditional expectation that optimizes the bandwidth selection of kernel-type estimators. Second, we address the portfolio selection problem from the point of view of different non-satiable investors, namely risk-averse and risk-seeking investors. In particular, using a well-known ordering classification, we first identify different definitions of returns based on the investors' preferences. Finally, for each problem, we examine several admissible portfolio optimization problems applied to the US stock market. The proposed empirical analysis allows us to evaluate the impact of conditional expectation estimators in portfolio theory.
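As a baseline for the kind of estimator discussed, a multivariate Nadaraya-Watson estimator of E[Y | X = x] with a Gaussian product kernel looks as follows; the fixed bandwidth h is a hypothetical choice, whereas the paper's estimator optimizes the bandwidth selection.

```python
import numpy as np

# Nadaraya-Watson estimate of E[Y | X = x] with a Gaussian kernel; a simple
# stand-in for the consistent multivariate estimator described above.
def cond_expectation(x, X, Y, h=0.5):
    w = np.exp(-((X - x) ** 2).sum(axis=1) / (2 * h ** 2))  # kernel weights
    return (w @ Y) / w.sum()
```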

6.
Method: In this paper, we introduce a bi-level optimization formulation for the model and feature selection problems of support vector machines (SVMs). A bi-level optimization model is proposed to select the best model, where the standard convex quadratic optimization problem of SVM training is cast as a subproblem. Feasibility: The optimal objective value of the SVM quadratic problem is minimized over a feasible range of the kernel parameters at the master level of the bi-level model. Since the optimal objective value of the subproblem is a continuous function of the kernel parameters, though implicitly defined over a certain region, a solution of this bi-level problem always exists. The problem of feature selection can be handled in a similar manner. Experiments and results: Two approaches for solving the bi-level problem of model and feature selection are also considered. Experimental results show that the bi-level formulation provides a plausible tool for model selection.
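The master/subproblem split can be mimicked with an ordinary grid search: the inner convex SVM training problem is solved for each candidate kernel parameter, and the master level picks the best. This is only a stand-in for the paper's bi-level solvers; the candidate gamma values are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Grid-search stand-in for the bi-level scheme: the lower level trains an
# SVM (a convex QP) per candidate gamma, the master level keeps the best.
def select_gamma(X, y, gammas=(0.01, 0.1, 1.0, 10.0)):
    scores = [cross_val_score(SVC(kernel='rbf', gamma=g), X, y, cv=5).mean()
              for g in gammas]
    return gammas[int(np.argmax(scores))]
```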

7.
A linear operator defined through an inner-product kernel in a Hilbert space corresponds to a natural reproducing kernel Hilbert space structure, which we call the H-HK structure. This structure intrinsically carries a basis method that answers several of the most fundamental questions about linear operators, including determining or characterizing the range space, solving operator equations, and computing the Moore-Penrose pseudo- (generalized) inverse operator. After a brief survey of existing results, the purpose of this paper is to establish the pre-orthogonal adaptive Fourier decomposition (POAFD) algorithm under the H-HK structure, from which sparse representations of the solutions to the above three problems are derived. Within the optimization methodology of successive matching pursuit, the optimal-selection principle of POAFD guarantees its optimality both in theory and in practice, and the algorithm is computationally feasible. The proposed method can be applied effectively to concrete practical problems, including signal and image reconstruction and the numerical solution of ordinary differential equations, partial differential equations, and optimization problems.
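POAFD belongs to the family of greedy matching-pursuit methods; as a rough illustration of that family (not of POAFD's pre-orthogonalization of the reproducing-kernel dictionary, which is the paper's contribution), the sketch below repeatedly picks the dictionary atom most correlated with the residual. The finite dictionary D is a toy assumption.

```python
import numpy as np

# Plain matching pursuit over a finite dictionary D (atoms as columns);
# POAFD additionally pre-orthogonalizes a reproducing-kernel dictionary.
def matching_pursuit(f, D, steps=5):
    D = D / np.linalg.norm(D, axis=0)     # normalize atoms
    residual, coeffs, picks = f.copy(), [], []
    for _ in range(steps):
        corr = D.T @ residual             # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))  # maximal-correlation selection
        residual = residual - corr[k] * D[:, k]
        coeffs.append(corr[k])
        picks.append(k)
    return coeffs, picks, residual
```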

8.
We address the problem of blind separation of instantaneous and convolutive mixtures of independent signals. We formulate this problem as an optimization problem using the mutual information criterion. In the case of instantaneous mixtures, we solve the resulting optimization problem with an extended Newton's method on the Stiefel manifold. This approach works for general independent signals and enjoys a quadratic convergence rate. For convolutive mixtures, we first decorrelate the signal mixtures by performing a spectral matrix factorization and then minimize the mutual information of the decorrelated signal mixtures. This approach not only separates the sources but also equalizes the extracted signals. For both the instantaneous and convolutive cases, we approximate the marginal density function of the output signals by a kernel estimator, whose convergence is also analyzed. The simulation results demonstrate the competitiveness of our proposed methods. This research is supported in part by the Natural Sciences and Engineering Research Council of Canada, Grant No. OPG0090391, and by the Canada Research Chair Program. Mathematics Subject Classification (2000): 20E28, 20G40, 20C20
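The marginal densities entering the mutual-information criterion can be approximated as below with a Gaussian kernel density estimator; Silverman's rule-of-thumb bandwidth is an assumption made for the sketch, not the paper's analyzed choice.

```python
import numpy as np

# Gaussian kernel density estimate of a separated output's marginal density.
def kde(samples, grid):
    n = samples.size
    h = 1.06 * samples.std() * n ** (-1 / 5)   # Silverman's rule of thumb
    diffs = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
```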

9.
Optimization, 2012, 61(7): 1099–1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problems as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We obtain the solution of the converted problem by a fixed point iterative algorithm and establish the convergence of the algorithm. We finally present some preliminary results of our computational experiments with the method.
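The reduced problem the paper arrives at, a convex quadratic program with non-negativity constraints, can be illustrated with a simple projected-gradient iteration; this stands in for the paper's fixed-point algorithm, and the step size is a hypothetical constant (it should stay below 2/λ_max(Q) for convergence).

```python
import numpy as np

# Projected gradient for  min 0.5 u^T Q u + c^T u  s.t.  u >= 0.
def qp_nonneg(Q, c, step=1e-2, iters=1000):
    u = np.zeros(len(c))
    for _ in range(iters):
        u = np.maximum(0.0, u - step * (Q @ u + c))  # step, then project onto u >= 0
    return u
```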

10.
Support vector machines were originally designed for binary classification. How to effectively extend them to multi-class classification is still an ongoing research issue. In this paper, we consider kernel machines that are natural extensions of the multi-category support vector machines originally proposed by Crammer and Singer. Based on algorithmic stability, we obtain generalization error bounds for the proposed kernel machines.
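In the Crammer-Singer family, prediction is an argmax over per-class kernel scores; the sketch below assumes a trained dual coefficient matrix tau (one column per class), which the training procedure, omitted here, would provide.

```python
import numpy as np

# Crammer-Singer-style multiclass prediction:
# argmax_k sum_i tau[i, k] * K(x_i, x).
def cs_predict(K_test_train, tau):
    # K_test_train: (m_test, n_train) kernel matrix; tau: (n_train, n_classes)
    return np.argmax(K_test_train @ tau, axis=1)
```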

11.
The theory of self-concordant barrier functions, founded by Nesterov and Nemirovski [4], provides polynomial-time interior-point algorithms for linear and convex optimization problems. Given the parameter of a self-concordant barrier function, the complexity of the corresponding interior-point algorithm can be analyzed. In this paper, we introduce a kernel-function-based locally self-concordant barrier function that satisfies the self-concordance property on the central path of a linear optimization problem and in its neighborhood. By evaluating the local parameter values of this barrier function, we obtain the theoretical iteration bound of a full-Newton-step interior-point algorithm for linear programming based on this locally self-concordant barrier function. This iteration bound matches the best currently known theoretical iteration bound.
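For reference, the standard self-concordance condition from the Nesterov-Nemirovski theory that the barrier function must satisfy along the central path reads:

```latex
% Self-concordance: the third directional derivative is dominated by the second.
\left| \nabla^{3} F(x)[h,h,h] \right|
  \;\le\; 2 \left( \nabla^{2} F(x)[h,h] \right)^{3/2},
  \qquad \forall\, x \in \operatorname{int}\operatorname{dom} F,\ \forall\, h;
% F is a \nu-self-concordant barrier if, in addition,
\left| \nabla F(x)[h] \right| \;\le\; \sqrt{\nu}\, \left( \nabla^{2} F(x)[h,h] \right)^{1/2}.
```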

12.
We define a new interior-point method (IPM) suitable for solving symmetric optimization (SO) problems. The proposed algorithm is based on a new search direction. To obtain this direction, we apply the method of algebraically equivalent transformation to the centering equation of the central path. We prove that the associated barrier cannot be derived from a usual kernel function. Therefore, we introduce a new notion, the concept of a positive-asymptotic kernel function. We conclude that this algorithm solves the problem in polynomial time and has the same complexity as the best known IPMs for SO.
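The algebraically equivalent transformation mentioned above replaces the centering condition by a transformed but equivalent one before linearization; schematically (with ∘ the Jordan product and φ a suitable scalar function applied spectrally):

```latex
% AET: instead of linearizing  x \circ s = \mu e , linearize
\varphi\!\left( \frac{x \circ s}{\mu} \right) = \varphi(e),
% which yields a different Newton system and hence a new search direction.
```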

13.
Supervised learning methods are powerful techniques for learning a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting from the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce a conjugate dual problem and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions. Corresponding dual problems are then derived for different loss functions. The theoretical results are applied by numerically solving a classification task on high-dimensional real-world data in order to obtain optimal classifiers. The results demonstrate the excellent performance of support vector classification for this particular problem.
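As a concrete instance of the dual problems derived, the classical hinge-loss (C-SVM) case with kernel K gives the familiar dual:

```latex
\max_{\alpha \in \mathbb{R}^{n}} \;
  \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.} \quad
  0 \le \alpha_i \le C, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0.
```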

14.
In the design and analysis of primal-dual interior-point algorithms, the barrier function plays an important role in both the search direction and the complexity of the algorithm. In this paper, the barrier function is determined by a kernel function, and a primal-dual interior-point algorithm for semidefinite programming is designed. This barrier function both defines the algorithm's new search directions and measures the distance from an iterate to the central path, while also playing a key role in the complexity analysis. We compute the iteration bounds of the algorithm, obtaining O(√n log n log(n/ε)) for the large-update method and O(√n log(n/ε)) for the small-update method, where n is the dimension of the semidefinite programming problem. Finally, a numerical example illustrates the effectiveness of the algorithm and its sensitivity to the kernel function's parameters.

15.
The presence of less relevant or highly correlated features often decreases classification accuracy. Feature selection, in which the most informative variables are selected for model generation, is an important step in data-driven modeling. In feature selection, one often tries to satisfy multiple criteria, such as feature discriminating power, model performance, or subset cardinality. Therefore, a multi-objective formulation of the feature selection problem is more appropriate. In this paper, we propose to use fuzzy criteria in feature selection by applying a fuzzy decision-making framework. This formulation allows for a more flexible definition of the goals in feature selection, and avoids the problem of weighting different goals that arises in classical multi-objective optimization. The optimization problem is solved using an ant colony optimization algorithm proposed in our previous work. We illustrate the added value of the approach by applying our proposed fuzzy feature selection algorithm to eight benchmark problems.
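A minimal sketch of the fuzzy aggregation idea: map each criterion to a membership degree in [0, 1] and combine them with a fuzzy "and" (the minimum). The membership functions below are illustrative assumptions, not the paper's.

```python
# Fuzzy scoring of a feature subset: desirability = min of criterion
# memberships (fuzzy intersection). Membership functions are toy choices.
def subset_desirability(accuracy, n_selected, n_total):
    mu_acc = accuracy                      # accuracy already lies in [0, 1]
    mu_small = 1.0 - n_selected / n_total  # prefer smaller subsets
    return min(mu_acc, mu_small)
```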

16.
Sliced inverse regression (SIR) is an important method for reducing the dimensionality of input variables. Its goal is to estimate the effective dimension reduction directions. In classification settings, SIR is closely related to Fisher discriminant analysis. Motivated by reproducing kernel theory, we propose a notion of nonlinear effective dimension reduction and develop a nonlinear extension of SIR called kernel SIR (KSIR). Both SIR and KSIR are based on principal component analysis. Alternatively, based on principal coordinate analysis, we propose dual versions of SIR and KSIR, which we refer to as sliced coordinate analysis (SCA) and kernel sliced coordinate analysis (KSCA), respectively. In the classification setting, we also call them discriminant coordinate analysis and kernel discriminant coordinate analysis. The computational complexities of SIR and KSIR depend on the dimensionality of the input vector and the number of input vectors, respectively, while those of SCA and KSCA both depend on the number of slices in the output. Thus, SCA and KSCA are very efficient dimension reduction methods.
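A bare-bones linear SIR, for orientation: slice the response, average the centered predictors within slices, and take leading eigenvectors of the between-slice covariance relative to the predictor covariance. Whitening and degenerate cases are simplified; KSIR would apply the same recipe in a kernel-induced feature space.

```python
import numpy as np

# Minimal sliced inverse regression: e.d.r. directions from the generalized
# eigenproblem  M v = lambda Sigma v, where M is the covariance of slice means.
def sir_directions(X, y, n_slices=5, n_dirs=2):
    Xc = X - X.mean(axis=0)
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in slices:
        m = Xc[idx].mean(axis=0)
        M += (len(idx) / len(y)) * np.outer(m, m)   # weighted slice means
    Sigma = np.cov(Xc, rowvar=False)                # assumed nonsingular
    vals, vecs = np.linalg.eig(np.linalg.solve(Sigma, M))
    top = np.argsort(-vals.real)[:n_dirs]
    return vecs[:, top].real
```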

17.
We view regularized learning of a function in a Banach space from its finite samples as an optimization problem. Within the framework of reproducing kernel Banach spaces, we prove the representer theorem for the minimizer of regularized learning schemes with a general loss function and a nondecreasing regularizer. When the loss function and the regularizer are differentiable, a characterization equation for the minimizer is also established.
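In the familiar Hilbert-space (RKHS) special case, the representer theorem takes the following form; the paper's result extends this to reproducing kernel Banach spaces, where the representation involves the duality map instead.

```latex
% RKHS special case: for nondecreasing \Omega and any loss L, the minimizer of
%   \min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} L\big(y_i, f(x_i)\big)
%        + \lambda\, \Omega\!\left( \|f\|_{\mathcal{H}_K} \right)
% admits the finite-dimensional representation
f^{*}(x) \;=\; \sum_{i=1}^{n} c_i\, K(x_i, x), \qquad c_i \in \mathbb{R}.
```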

18.
Since Vapnik proposed the concept of the transductive ("inference-type") support vector machine in the late 1990s, research on it has remained essentially at a standstill, mainly because the optimization model of this type of support vector machine is quite difficult to solve. This paper attempts to convert its optimization problem into an unconstrained one and then to construct a smooth unconstrained optimization problem with a kernel, thereby building a transductive support vector machine whose optimization problem is easy to solve, so as to break through the bottleneck that has hindered its further study.
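One standard route to the smooth unconstrained problem mentioned above is to replace the plus function (x)₊ = max(x, 0) in the unconstrained SVM objective by a smooth surrogate; the snippet below shows the usual log-exponential smoothing (an assumption about the construction, since the abstract does not specify one), with beta controlling sharpness.

```python
import numpy as np

# Smooth surrogate for (x)_+ = max(x, 0): converges to the plus function as
# beta -> infinity, making the unconstrained SVM objective differentiable.
def smooth_plus(x, beta=10.0):
    return x + np.log1p(np.exp(-beta * x)) / beta
```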

19.
To improve the accuracy of audio-signal classification, this paper proposes a support vector machine (SVM) classification method based on an improved particle swarm optimization (PSO) algorithm, abbreviated IPSO-SVM. First, Mel-frequency cepstral coefficients are used to extract features from four types of audio signals. Second, an adaptive mutation factor is introduced into PSO, enabling it to escape local minima; the inertia weight of PSO is also improved, changing it from a constant to an exponentially decreasing function. As the iterations proceed, the weight gradually decreases, which favors local search by the particles. Finally, the improved PSO iteratively optimizes the SVM penalty factor c and kernel parameter g to improve prediction accuracy. Experimental results show that, compared with the traditional SVM, PSO-SVM, and GA-SVM, the proposed IPSO-SVM algorithm produces more accurate classifications.
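The improved inertia weight can be sketched as an exponentially decreasing schedule from an exploratory value toward a refining one; the endpoint values and decay rate below are hypothetical, as the abstract does not give them.

```python
import numpy as np

# Exponentially decreasing PSO inertia weight: large early (exploration),
# small late (local refinement). Constants are illustrative.
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4, rate=5.0):
    return w_end + (w_start - w_end) * np.exp(-rate * t / t_max)
```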

20.
The support vector machine (SVM) is known for its good performance in two-class classification, but its extension to multiclass classification is still an ongoing research issue. In this article, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in two-class classification, but also can naturally be generalized to the multiclass case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the support points of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a potential computational advantage over the SVM.
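The IVM's probability output comes from kernel logistic regression restricted to the import points; a minimal sketch of that posterior, assuming trained coefficients a and intercept b, is:

```python
import numpy as np

# KLR posterior over the import set S:
# p(y = 1 | x) = sigmoid( sum_{i in S} a_i K(x_i, x) + b ).
def ivm_posterior(K_x_imports, a, b):
    z = K_x_imports @ a + b          # (m_test, |S|) kernel matrix times coeffs
    return 1.0 / (1.0 + np.exp(-z))
```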
