Similar Literature
20 similar documents retrieved.
1.
This paper presents an integrated approach for portfolio selection in a multicriteria decision making framework. First, we use Support Vector Machines to classify financial assets into three pre-defined classes, based on their performance on some key financial criteria. Next, we employ a Real-Coded Genetic Algorithm to solve a mathematical model of the multicriteria portfolio selection problem in the respective classes, incorporating investor preferences.
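A rough sketch of this two-stage pipeline is given below: an SVM classifies synthetic assets into three classes, then a simple real-coded GA searches over portfolio weights within the preferred class. The data, the mean-variance-style fitness, and the GA operators are illustrative assumptions, not the paper's actual model.

```python
# Two-stage sketch: SVM classification of assets, then a real-coded GA over
# portfolio weights. All data and parameters here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stage 1: classify assets into three pre-defined classes from key criteria.
X = rng.normal(size=(200, 5))                 # 200 assets x 5 financial criteria
y = rng.integers(0, 3, size=200)              # class labels 0, 1, 2 (hypothetical)
clf = SVC(kernel="rbf").fit(X, y)
chosen = X[clf.predict(X) == 2]               # assets assigned to the top class
n = len(chosen)

# Stage 2: real-coded GA over portfolio weights for the selected assets.
returns = rng.normal(0.05, 0.10, size=n)      # placeholder expected returns
cov = np.cov(rng.normal(size=(60, n)), rowvar=False)  # placeholder covariance

def fitness(w):                               # reward return, penalize risk
    return returns @ w - 0.5 * w @ cov @ w

pop = rng.random((50, n))
pop /= pop.sum(axis=1, keepdims=True)         # weights non-negative, sum to one
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-25:]]                    # selection
    kids = 0.5 * (parents[rng.integers(0, 25, 25)]
                  + parents[rng.integers(0, 25, 25)])          # arithmetic crossover
    kids = np.clip(kids + rng.normal(0, 0.01, kids.shape), 0, None)  # mutation
    kids /= kids.sum(axis=1, keepdims=True)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(w) for w in pop])]
```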

2.
Support vector machines can be posed as quadratic programming problems in a variety of ways. This paper investigates a formulation using the two-norm for the misclassification error that leads to a positive definite quadratic program with a single equality constraint under a duality construction. The quadratic term is a small-rank update to a diagonal matrix with positive entries. The optimality conditions of the quadratic program are reformulated as a semismooth system of equations using the Fischer-Burmeister function, and a damped Newton method is applied to solve the resulting problem. The algorithm is shown to converge from any starting point with a Q-quadratic rate of convergence. At each iteration, the Sherman-Morrison-Woodbury update formula is used to solve the key linear system. Results for a large problem with 60 million observations are presented, demonstrating the scalability of the proposed method on a personal computer. Significant computational savings are realized as the inactive variables are identified and exploited during the solution process. Further results on a small problem separated by a nonlinear surface are given, showing the gains in performance that can be made from restarting the algorithm as the data evolves. Accepted: December 8, 2003. This work was partially supported by NSF grant number CCR-9972372; AFOSR grant number F49620-01-1-0040; the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38; and Microsoft Corporation.
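The per-iteration workhorse here is the Sherman-Morrison-Woodbury solve against a "diagonal plus small-rank" matrix. The sketch below shows only that linear-algebra kernel, not the full semismooth Newton method with the Fischer-Burmeister reformulation; shapes and data are illustrative.

```python
# Sherman-Morrison-Woodbury solve for (diag(d) + Z Z^T) x = b, the structure of
# the quadratic term described above. Cost is O(n k^2) instead of O(n^3).
import numpy as np

def smw_solve(d, Z, b):
    Dinv_b = b / d
    Dinv_Z = Z / d[:, None]
    k = Z.shape[1]
    capacitance = np.eye(k) + Z.T @ Dinv_Z       # small k x k system
    return Dinv_b - Dinv_Z @ np.linalg.solve(capacitance, Z.T @ Dinv_b)

rng = np.random.default_rng(1)
n, k = 100_000, 10                                # n huge, rank update tiny
d = rng.random(n) + 1.0                           # positive diagonal entries
Z = rng.normal(size=(n, k))
b = rng.normal(size=n)
x = smw_solve(d, Z, b)
print(np.linalg.norm(d * x + Z @ (Z.T @ x) - b))  # residual should be tiny
```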

3.
Knowledge based proximal support vector machines
We propose a proximal version of the knowledge based support vector machine formulation, termed knowledge based proximal support vector machines (KBPSVMs) in the sequel, for binary data classification. The KBPSVM classifier incorporates prior knowledge in the form of multiple polyhedral sets, and determines two parallel planes that are kept as distant from each other as possible. The proposed algorithm is simple and fast, as no quadratic programming solver needs to be employed: effectively, only the solution of a structured system of linear equations is needed.
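For reference, the sketch below trains a plain linear proximal SVM, whose solution is a single structured linear system; it omits the polyhedral knowledge sets that distinguish KBPSVMs, and assumes the standard proximal formulation min ν/2‖e − D(Aw − eγ)‖² + ½(‖w‖² + γ²).

```python
# Minimal linear proximal SVM: the classifier (w, gamma) solves one
# regularized least-squares linear system; no QP solver is involved.
import numpy as np

def psvm_train(A, y, nu=1.0):
    m, n = A.shape
    D = np.diag(y.astype(float))                  # labels in {-1, +1}
    H = D @ np.hstack([A, -np.ones((m, 1))])      # H = D [A, -e]
    # Optimality condition: (I/nu + H^T H) [w; gamma] = H^T e
    u = np.linalg.solve(np.eye(n + 1) / nu + H.T @ H, H.T @ np.ones(m))
    return u[:n], u[n]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(+1, 1, (100, 2)), rng.normal(-1, 1, (100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])
w, gamma = psvm_train(X, y)
print("training accuracy:", (np.sign(X @ w - gamma) == y).mean())
```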

4.
Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), a linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar’s notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QPs arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernels.
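The sketch below implements the simplest member of this family: a two-coordinate step with a most-violating-pair (Gauss-Southwell-type) selection rule on the SVM dual. The paper's method generalizes this to coordinate blocks chosen via conformal realizations and proves a linear rate; this simplified version only shows the mechanics.

```python
# Most-violating-pair coordinate descent on the SVM dual:
#   min 0.5 a'Qa - e'a   s.t.  y'a = 0,  0 <= a <= C,   Q = (y y') * K.
import numpy as np

def mvp_coordinate_descent(K, y, C=1.0, tol=1e-5, max_iter=5000):
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K
    a, g = np.zeros(n), -np.ones(n)              # g = Qa - e (dual gradient)
    for _ in range(max_iter):
        s = -y * g                               # KKT violation scores
        up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
        lo = ((y > 0) & (a > 0)) | ((y < 0) & (a < C))
        i = np.where(up, s, -np.inf).argmax()
        j = np.where(lo, s, np.inf).argmin()
        if s[i] - s[j] < tol:                    # approximate KKT point reached
            break
        # Exact line search along d = y_i e_i - y_j e_j (keeps y'a constant),
        den = Q[i, i] + Q[j, j] - 2 * y[i] * y[j] * Q[i, j]
        t = (s[i] - s[j]) / max(den, 1e-12)
        t = min(t, C - a[i] if y[i] > 0 else a[i])   # clipped to the box.
        t = min(t, a[j] if y[j] > 0 else C - a[j])
        a[i] += y[i] * t
        a[j] -= y[j] * t
        g += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
a = mvp_coordinate_descent(X @ X.T, y)           # linear kernel example
```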

5.
6.
Method: In this paper, we introduce a bi-level optimization formulation for the model and feature selection problems of support vector machines (SVMs). A bi-level optimization model is proposed to select the best model, where the standard convex quadratic optimization problem of SVM training is cast as a subproblem. Feasibility: The optimal objective value of the quadratic problem of SVMs is minimized over a feasible range of the kernel parameters at the master level of the bi-level model. Since the optimal objective value of the subproblem is a continuous function of the kernel parameters, though implicitly defined over a certain region, the solution of this bi-level problem always exists. The problem of feature selection can be handled in a similar manner. Experiments and results: Two approaches for solving the bi-level problem of model and feature selection are considered as well. Experimental results show that the bi-level formulation provides a plausible tool for model selection.
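A minimal stand-in for the master level is sketched below: a grid over the kernel parameter replaces the paper's two solution approaches, and the inner SVM subproblem is solved by scikit-learn, with its optimal dual objective value recovered from the fitted coefficients.

```python
# Master level: minimize the optimal value of the SVM dual over a feasible
# range of the kernel parameter gamma (grid search stands in for the solver).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

def inner_optimal_value(gamma):
    """Optimal dual objective 0.5 a'Qa - e'a of the SVM subproblem."""
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
    ay = clf.dual_coef_.ravel()                   # alpha_i * y_i on support vecs
    K = rbf_kernel(X[clf.support_], X[clf.support_], gamma=gamma)
    return 0.5 * ay @ K @ ay - np.abs(ay).sum()

gammas = np.logspace(-3, 1, 20)                   # feasible parameter range
best_gamma = gammas[int(np.argmin([inner_optimal_value(g) for g in gammas]))]
```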

7.
We propose using support vector machines (SVMs) to learn the efficient set in multiple objective discrete optimization (MODO). We conjecture that a surface generated by an SVM could provide a good approximation of the efficient set. As one way of testing this idea, we embed the SVM-approximated efficient set information into a genetic algorithm (GA). This is accomplished by using an SVM-based fitness function that guides the GA search. We implement our SVM-guided GA on the multiple objective knapsack and assignment problems. We observe that using the SVM improves the performance of the GA compared to a benchmark distance-based fitness function and may provide competitive results.
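The following sketch shows the core idea under simplifying assumptions: efficiency labels are computed for a synthetic bi-objective sample, an SVM is fit to them, and its decision function serves as the GA fitness. The knapsack/assignment encodings and the benchmark distance-based fitness are omitted.

```python
# SVM surface approximating the efficient set, reused as a GA fitness function.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
pts = rng.random((300, 2))                   # objective vectors (maximize both)
eff = np.array([not np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
                for p in pts])               # Pareto-efficiency labels
svm = SVC(kernel="rbf").fit(pts, eff.astype(int))

def fitness(obj_vector):
    # Signed distance to the SVM surface: larger values steer the GA search
    # toward (approximately) efficient solutions.
    return svm.decision_function(obj_vector.reshape(1, -1))[0]

print(fitness(np.array([0.9, 0.9])), fitness(np.array([0.1, 0.1])))
```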

8.
We propose variants of two SVM regression algorithms expressly tailored to exploit additional information summarizing the relevance of each data item, as a measure of its relative importance w.r.t. the remaining examples. These variants, which reduce to the original formulations when all data items have the same relevance, are preliminarily tested on synthetic and real-world data sets. The results obtained outperform standard SVM approaches to regression when evaluated in light of the above-mentioned additional information about data quality.
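A quick stand-in for this idea, assuming scikit-learn: per-sample weights in ε-SVR rescale the penalty parameter item by item, which captures the same relevance information, although the paper's variants modify the formulations themselves.

```python
# Relevance-weighted SVM regression via per-sample weights (illustrative data).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = np.sort(rng.uniform(0, 6, 200))[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)
relevance = rng.uniform(0.1, 1.0, 200)            # known importance of each item

plain = SVR(kernel="rbf", C=1.0).fit(X, y)
weighted = SVR(kernel="rbf", C=1.0).fit(X, y, sample_weight=relevance)
# With equal relevance the two models coincide, mirroring the reduction property.
```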

9.
The goal of classification (or pattern recognition) is to construct a classifier with small misclassification error. The notions of consistency and universal consistency are important to the construction of classification rules. A consistent rule guarantees that taking more samples essentially suffices to roughly reconstruct the unknown distribution. The support vector machine (SVM) algorithm is one of the most important rules in two-category classification. How to effectively extend the SVM to multicategory classification is still an ongoing research issue. Different versions of multicategory support vector machines (MSVMs) have been proposed and used in practice. We study the one designed by Lee, Lin and Wahba with the hinge loss functional. The consistency of MSVMs is established under a mild condition. As a corollary, universal consistency holds true if the reproducing kernel Hilbert space is dense in the C norm. In addition, an example is given to demonstrate the main results. Dedicated to Charlie Micchelli on the occasion of his 60th birthday. Supported in part by NSF of China under Grants 10571010 and 10171007.
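For concreteness, the studied loss can be written down directly; a small sketch of the Lee-Lin-Wahba multicategory hinge loss, assuming the usual sum-to-zero coding of the score vector, is given below.

```python
# Lee-Lin-Wahba hinge loss: L(y, f) = sum_{j != y} (f_j + 1/(k-1))_+ ,
# where f is a length-k score vector with sum(f) == 0 and y the true class.
import numpy as np

def llw_hinge(y, f):
    k = len(f)
    margins = f + 1.0 / (k - 1)
    margins[y] = 0.0                     # only wrong-class coordinates contribute
    return np.maximum(margins, 0.0).sum()

print(llw_hinge(0, np.array([1.0, -0.5, -0.5])))   # well-classified: zero loss
```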

10.
Support Vector Machines (SVMs) are known to be a powerful nonparametric classification technique even for high-dimensional data. Although predictive ability is important, obtaining an easy-to-interpret classifier is also crucial in many applications. A linear SVM provides a classifier based on a linear score. In the case of functional data, the coefficient function that defines such a linear score usually has many irregular oscillations, making it difficult to interpret.
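As a toy illustration of the issue, the sketch below fits a linear SVM to discretized random curves and measures the roughness of the resulting coefficient function; the data and the roughness measure are arbitrary stand-ins for the functional setting.

```python
# Linear SVM on discretized curves: the coefficient vector plays the role of
# the coefficient function, and is typically full of irregular oscillations.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
curves = rng.normal(size=(150, 100)).cumsum(axis=1)   # 150 curves on a grid
labels = (curves[:, -1] > np.median(curves[:, -1])).astype(int)
clf = LinearSVC(dual=False, max_iter=5000).fit(curves, labels)
beta = clf.coef_.ravel()                       # discretized coefficient function
print("roughness:", np.sum(np.diff(beta, 2) ** 2))   # large -> hard to interpret
```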

11.
Support vector machines (SVMs) have attracted much attention in theoretical and in applied statistics. The main topics of recent interest are consistency, learning rates and robustness. We address the open problem of whether SVMs are qualitatively robust. Our results show that SVMs are qualitatively robust for any fixed regularization parameter λ. However, under extremely mild conditions on the SVM, it turns out that SVMs are no longer qualitatively robust for any null sequence λn, which are the classical sequences needed to obtain universal consistency. This lack of qualitative robustness is of a rather theoretical nature because we show that, in any case, SVMs fulfill a finite sample qualitative robustness property. For a fixed regularization parameter, SVMs can be represented by a functional on the set of all probability measures. Qualitative robustness is proven by showing that this functional is continuous with respect to the topology generated by weak convergence of probability measures. Combined with the existence and uniqueness of SVMs, our results show that SVMs are the solutions of a well-posed mathematical problem in Hadamard’s sense.

12.
Discrete support vector machines (DSVM), originally proposed for binary classification problems, have been shown to outperform other competing approaches on well-known benchmark datasets. Here we address their extension to multicategory classification, by developing three different methods. Two of them are based respectively on one-against-all and round-robin classification schemes, in which a number of binary discrimination problems are solved by means of a variant of DSVM. The third method directly addresses the multicategory classification task, by building a decision tree in which an optimal split to separate classes is derived at each node by a new extended formulation of DSVM. Computational tests on publicly available datasets are then conducted to compare the three multicategory classifiers based on DSVM with other methods, indicating that the proposed techniques achieve significantly higher accuracies. This research was partially supported by PRIN grant 2004132117.
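The first two schemes can be sketched generically as below, with an ordinary linear SVM standing in for the DSVM variant actually used in the paper; the decision-tree method is omitted. The label-as-index trick assumes classes are coded 0..k-1.

```python
# One-against-all and round-robin (one-against-one) multicategory schemes
# around a generic binary base learner.
from itertools import combinations
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
classes = np.unique(y)                                  # assumed to be 0..k-1

# One-against-all: one binary problem per class; predict the highest score.
ova = {c: LinearSVC(dual=False).fit(X, (y == c).astype(int)) for c in classes}
scores = np.column_stack([ova[c].decision_function(X) for c in classes])
pred_ova = classes[scores.argmax(axis=1)]

# Round-robin: one binary problem per unordered pair; predict by majority vote.
votes = np.zeros((len(X), len(classes)))
for a, b in combinations(classes, 2):
    mask = (y == a) | (y == b)
    clf = LinearSVC(dual=False).fit(X[mask], (y[mask] == a).astype(int))
    winner = np.where(clf.predict(X) == 1, a, b)        # class label per point
    votes[np.arange(len(X)), winner] += 1
pred_rr = classes[votes.argmax(axis=1)]
```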

13.
14.
Support Vector Machines (SVMs) are now very popular as a powerful method in pattern classification problems. One of the main features of SVMs is to produce a separating hyperplane which maximizes the margin in the feature space induced by nonlinear mapping using a kernel function. As a result, SVMs can treat not only linear separation but also nonlinear separation. While the soft margin method of SVMs considers only the distance between the separating hyperplane and misclassified data, we propose in this paper a multi-objective programming formulation considering surplus variables. A similar formulation was extensively researched in linear discriminant analysis, mostly in the 1980s, using Goal Programming (GP). This paper compares these conventional methods, such as SVMs and GP, with our proposed formulation through several examples. Received: September 2003; revised: December 2003.
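A small sketch of the goal-programming flavor of this idea: each observation gets a deviation variable η and a surplus variable ξ around the target margin, and the trade-off is solved as a single LP. The weight on surpluses, the cap on ξ, and the box on w are illustrative choices, not the paper's formulation.

```python
# Goal-programming discriminant with deviation (eta) and surplus (xi) variables:
#   min  sum(eta) - 0.05 * sum(xi)
#   s.t. y_i (w'x_i + b) + eta_i - xi_i = 1,  eta >= 0,  0 <= xi <= 1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(+1, 1, (40, 2)), rng.normal(-1, 1, (40, 2))])
y = np.hstack([np.ones(40), -np.ones(40)])
m, n = X.shape

# Variable order: [w (n), b (1), eta (m), xi (m)]
A_eq = np.hstack([y[:, None] * X, y[:, None], np.eye(m), -np.eye(m)])
b_eq = np.ones(m)
c = np.hstack([np.zeros(n + 1), np.ones(m), -0.05 * np.ones(m)])
bounds = ([(-1, 1)] * n + [(None, None)]        # bounded w avoids LP scaling
          + [(0, None)] * m + [(0, 1)] * m)     # eta >= 0, capped surplus
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
w, b = res.x[:n], res.x[n]
print("training accuracy:", (np.sign(X @ w + b) == y).mean())
```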

15.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to the solution of large scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for the choice of the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackle large scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.
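One iteration of such a scheme can be sketched as below: only a working set W of variables is optimized, with a proximal term τ/2‖a_W − a_W^k‖² added to the subproblem objective. A generic solver handles the small subproblem here; the paper's selection rule and convergence machinery are not reproduced.

```python
# One decomposition step with a proximal point modification of the subproblem.
import numpy as np
from scipy.optimize import minimize

def decomposition_step(Q, y, a, W, C=1.0, tau=1e-2):
    Wc = np.setdiff1d(np.arange(len(a)), W)
    aW0 = a[W].copy()

    def sub_obj(aW):
        full = a.copy(); full[W] = aW
        f = 0.5 * full @ Q @ full - full.sum()          # original dual objective
        return f + 0.5 * tau * np.sum((aW - aW0) ** 2)  # proximal point term

    cons = {"type": "eq",                               # preserve y'a = 0
            "fun": lambda aW: y[W] @ aW + y[Wc] @ a[Wc]}
    res = minimize(sub_obj, aW0, bounds=[(0, C)] * len(W),
                   constraints=cons, method="SLSQP")
    out = a.copy(); out[W] = res.x
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
Q = (y[:, None] * y[None, :]) * (X @ X.T)
a = np.zeros(40)                                        # feasible starting point
for _ in range(10):                                     # cycle random working sets
    W = rng.choice(40, size=4, replace=False)
    a = decomposition_step(Q, y, a, W)
```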

16.
This paper presents the multiple instance classification problem, which can be used for drug and molecular activity prediction, text categorization, image annotation, and object recognition. In order to model a more robust representation of outliers, hard margin loss formulations that minimize the number of misclassified instances are proposed. Although the problem is $\mathcal{NP}$-hard, computational studies show that medium-sized problems can be solved to optimality in reasonable time using integer programming and constraint programming formulations. A three-phase heuristic algorithm is proposed for larger problems. Furthermore, different loss functions such as hinge loss, ramp loss, and hard margin loss are empirically compared in the context of multiple instance classification. The proposed heuristic and robust support vector machines with hard margin loss demonstrate superior generalization performance compared to other approaches for multiple instance learning.
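The three losses being compared are simple functions of the margin m = y·f(x); a quick sketch:

```python
# Hinge grows linearly with the violation, ramp is clipped (robust to outliers),
# and hard margin loss just counts misclassified instances.
import numpy as np

def hinge(m):
    return np.maximum(0.0, 1.0 - m)

def ramp(m, s=-1.0):
    return np.minimum(np.maximum(0.0, 1.0 - m), 1.0 - s)

def hard_margin(m):
    return (m < 0).astype(float)          # the 0-1 misclassification count

margins = np.array([-2.0, -0.5, 0.5, 2.0])
print(hinge(margins), ramp(margins), hard_margin(margins))
```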

17.
To improve the numerical performance of the proximal support vector machine (PSVM), we introduce an $\ell_0$-norm regularization term into the PSVM model and propose a sparse proximal support vector machine (SPSVM), which enhances the feature selection ability of the classifier. However, problems with an $\ell_0$-norm regularization term are generally NP-hard. To overcome this difficulty, the $\ell_0$-norm is approximated by a nonconvex continuous function, and a suitable DC decomposition converts the problem into a DC program, which is then solved; the convergence of the algorithm is also discussed. Numerical results show that the proposed method is effective and stable on both simulated and real-world data.
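A common choice for the nonconvex continuous surrogate is the exponential approximation of the $\ell_0$-norm, which admits an explicit DC split; the sketch below shows that split (the SPSVM training problem itself is not reproduced).

```python
# Smooth nonconvex surrogate for ||w||_0 and its DC decomposition phi = g - h,
# with both g and h convex; alpha controls the tightness of the approximation.
import numpy as np

alpha = 5.0
phi = lambda w: np.sum(1.0 - np.exp(-alpha * np.abs(w)))    # ~ ||w||_0
g = lambda w: alpha * np.sum(np.abs(w))                     # convex part
h = lambda w: np.sum(alpha * np.abs(w) - 1.0 + np.exp(-alpha * np.abs(w)))

w = np.array([0.0, 0.01, -0.5, 2.0])
print(phi(w), g(w) - h(w))                                  # identical values
```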

18.
We consider linear programming approaches for support vector machines (SVMs). The linear programming problems are introduced as an approximation of the quadratic programming problems commonly used in SVMs. When we consider kernel-based nonlinear discriminators, the approximation can be viewed as kernel principal component analysis, which extracts an important subspace from the feature space characterized by the kernel function. We show that any data points nonlinearly, and implicitly, projected into the feature space by kernel functions can be approximately expressed as points lying explicitly in a low-dimensional Euclidean space, which enables us to develop linear programming formulations for nonlinear discriminators. We also introduce linear programming formulations for multicategory classification problems. We show that the same maximal margin principle exploited in SVMs can be incorporated into the linear programming formulations. Moreover, considering the low-dimensional feature subspace extraction, we can generate nonlinear multicategory discriminators by solving linear programming problems. Numerical experiments on real-world datasets are presented. We show that a fairly low-dimensional feature subspace can achieve reasonable accuracy, and that the linear programming formulations compute discriminators efficiently. We also discuss a sampling strategy which may be crucial for huge datasets.
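A bare-bones kernel LP-SVM can be written directly: minimize the $\ell_1$-norm of the kernel expansion coefficients plus hinge slacks subject to margin constraints. The sketch below does only this; the low-dimensional subspace extraction and the multicategory formulations are not included.

```python
# Kernel LP-SVM:  min ||u||_1 + C * sum(eta)
#                 s.t. y_i (sum_j u_j K(x_i, x_j) + b) >= 1 - eta_i, eta >= 0.
import numpy as np
from scipy.optimize import linprog
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(+1.5, 1, (30, 2)), rng.normal(-1.5, 1, (30, 2))])
y = np.hstack([np.ones(30), -np.ones(30)])
m, C = len(y), 1.0
K = rbf_kernel(X, X, gamma=0.5)

# Variables: [u+ (m), u- (m), b+, b-, eta (m)], all >= 0; u = u+ - u-.
c = np.hstack([np.ones(2 * m), 0, 0, C * np.ones(m)])
G = -np.hstack([y[:, None] * K, -(y[:, None] * K),
                y[:, None], -y[:, None], np.eye(m)])     # -(margin) <= -1
res = linprog(c, A_ub=G, b_ub=-np.ones(m), method="highs")
u = res.x[:m] - res.x[m:2 * m]
b = res.x[2 * m] - res.x[2 * m + 1]
print("training accuracy:", (np.sign(K @ u + b) == y).mean())
```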

19.
The support vector machine (SVM) is a powerful tool in binary classification, known to attain excellent misclassification rates. On the other hand, many...

20.