Similar Documents (20 results)
1.
We investigate constrained first-order techniques for training support vector machines (SVMs) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas from incremental gradient techniques, gradient acceleration, and successive simple calculations of Lagrange multipliers. Both primal and dual formulations are studied and compared. Experiments show that the constrained incremental algorithms working in the dual space achieve the best trade-off between prediction accuracy and training time. We compare against an unconstrained large-scale learning algorithm (Pegasos stochastic gradient) to emphasize that our choice remains competitive for large-scale learning due to the very special structure of the training problem.
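For context, the Pegasos baseline mentioned above is easy to sketch. The following is a minimal, illustrative implementation of a Pegasos-style stochastic subgradient update for a linear SVM (function names and the toy data are our own; the optional projection step of the original Pegasos paper is omitted):

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style stochastic subgradient descent for a linear SVM.

    Minimizes (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i * w.x_i).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # step size 1/(lam * t)
            if y[i] * X[i] @ w < 1:          # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w

# Toy linearly separable data (illustrative only)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = pegasos_train(X, y)
pred = np.sign(X @ w)
```

On separable toy data like this the learned `w` classifies the training points correctly within a few epochs.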

2.
In this paper an algorithm for solving a linearly constrained nonlinear programming problem is developed. Given a feasible point, a correction vector is computed by solving a least distance programming problem over a polyhedral cone defined in terms of the gradients of the “almost” binding constraints. Mukai's approximate scheme for computing the step size is generalized to handle the constraints. This scheme provides an estimate for the step size based on a quadratic approximation of the function. This estimate is used in conjunction with an Armijo line search to calculate a new point. It is shown that each accumulation point is a Kuhn-Tucker point of a slight perturbation of the original problem. Furthermore, under suitable second-order optimality conditions, it is shown that eventually only one trial is needed to compute the step size.
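The Armijo line search used in such step-size schemes can be sketched as the standard backtracking rule (a generic textbook version, not the authors' generalized Mukai estimate; all names are illustrative):

```python
import numpy as np

def armijo_step(f, g, x, d, t=1.0, beta=0.5, sigma=1e-4):
    """Backtracking Armijo rule: shrink t until the sufficient-decrease
    condition f(x + t*d) <= f(x) + sigma * t * g(x).d holds, where d is
    a descent direction (g(x).d < 0)."""
    slope = g(x) @ d
    while f(x + t * d) > f(x) + sigma * t * slope:
        t *= beta
    return t

# Toy quadratic: f(x) = ||x||^2, steepest-descent direction at x = (2,)
f = lambda x: float(x @ x)
g = lambda x: 2.0 * x
x0 = np.array([2.0])
d = -g(x0)                     # d = [-4]
t = armijo_step(f, g, x0, d)   # t = 1 overshoots; one backtrack gives t = 0.5
```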

3.
We consider the problem of scheduling a set of jobs with different release times on parallel machines so as to minimize the makespan of the schedule. The machines have the same processing speed, but each job is compatible with only a subset of those machines. The machines can be linearly ordered such that a higher-indexed machine can process all those jobs that a lower-indexed machine can process. We present an efficient algorithm for this problem with a worst-case performance ratio of 2. We also develop a polynomial time approximation scheme (PTAS) for the problem, as well as a fully polynomial time approximation scheme (FPTAS) for the case in which the number of machines is fixed.
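A greedy "least-loaded eligible machine" heuristic for this nested-eligibility setting can be sketched as follows (an illustrative baseline in the spirit of list scheduling, not the paper's 2-approximation algorithm or its PTAS; encoding eligibility by a job grade is our assumption):

```python
def schedule_gos(jobs, m):
    """Greedy list scheduling under nested ('grade of service') eligibility.

    jobs: list of (processing_time, grade); a job of grade g may run only on
    machines g..m-1 (here higher-indexed machines are the more capable ones).
    Returns (assignment, makespan). Release times are ignored in this sketch.
    """
    loads = [0.0] * m
    assign = [None] * len(jobs)
    # Longest-processing-time order usually helps greedy heuristics
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][0])
    for j in order:
        p, g = jobs[j]
        k = min(range(g, m), key=lambda i: loads[i])  # least-loaded eligible machine
        loads[k] += p
        assign[j] = k
    return assign, max(loads)

# Small instance: two machines, jobs of grade 1 must use machine 1
jobs = [(3.0, 0), (2.0, 1), (2.0, 1), (1.0, 0)]
assign, makespan = schedule_gos(jobs, m=2)
```

On this instance the greedy schedule balances both machines at load 4.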

4.
Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), a linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar’s notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QPs arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernels.
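Decomposition methods of this kind update a small block of dual variables per iteration while keeping the equality constraint satisfied. A minimal SMO-style pairwise update for the linear-kernel SVM dual is sketched below (this follows the well-known "simplified SMO" recipe with a random partner index, not the paper's Gauss-Southwell rule; all names are illustrative):

```python
import numpy as np

def smo_train(X, y, C=1.0, tol=1e-4, max_passes=20, seed=0):
    """Tiny SMO-style solver for the linear-kernel SVM dual:
    max sum(a) - 0.5 * sum_ij a_i a_j y_i y_j <x_i, x_j>
    s.t. 0 <= a_i <= C and sum_i a_i y_i = 0.
    Updates one pair (i, j) at a time; a sketch, not production code."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T
    a = np.zeros(n)
    b = 0.0
    passes, sweeps = 0, 0
    while passes < max_passes and sweeps < 1000:   # hard cap for safety
        sweeps += 1
        changed = 0
        for i in range(n):
            Ei = (a * y) @ K[:, i] + b - y[i]
            if (y[i] * Ei < -tol and a[i] < C) or (y[i] * Ei > tol and a[i] > 0):
                j = rng.integers(n - 1)
                j = j if j < i else j + 1          # random partner j != i
                Ej = (a * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = a[i], a[j]
                if y[i] != y[j]:
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if eta >= 0:                        # need negative curvature
                    continue
                a[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(a[j] - aj_old) < 1e-5:
                    continue
                # keep sum a_i y_i = 0 by compensating on a_i
                a[i] = ai_old + y[i] * y[j] * (aj_old - a[j])
                # threshold update (standard simplified-SMO rule)
                b1 = b - Ei - y[i]*(a[i]-ai_old)*K[i,i] - y[j]*(a[j]-aj_old)*K[i,j]
                b2 = b - Ej - y[i]*(a[i]-ai_old)*K[i,j] - y[j]*(a[j]-aj_old)*K[j,j]
                if 0 < a[i] < C:
                    b = b1
                elif 0 < a[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (a * y) @ X
    return a, w, b

X = np.array([[2.0, 2.0], [1.0, 2.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
a, w, b = smo_train(X, y)
pred = np.sign(X @ w + b)
```

The iterates stay dual-feasible throughout: the pair update preserves the equality constraint and the clipping keeps each multiplier in [0, C].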

5.
Supervised learning methods are powerful techniques for learning a function from a given set of labeled data, the so-called training data. In this paper the support vector machine approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce a conjugate dual problem to it and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions. Corresponding dual problems are then derived for different loss functions. The theoretical results are applied by numerically solving a classification task using high-dimensional real-world data in order to obtain optimal classifiers. The results demonstrate the excellent performance of support vector classification for this particular problem.

6.
A support vector regression machine based on a fuzzy AdaBoost algorithm
To address the limited prediction accuracy of a single support vector regression machine, we combine the AdaBoost algorithm with a membership function and propose a support vector regression model based on a fuzzy AdaBoost algorithm. Experiments applying the model to financial time-series prediction show a measurable improvement in prediction accuracy, demonstrating the model's effectiveness and feasibility.

7.
Analysis of the solution set of the primal problem of the linear support vector ordinal regression machine
This paper presents a theoretical study of linear support vector ordinal regression machines. We analyze in detail the existence and uniqueness of solutions to the corresponding primal problem, characterize the exact structure of its solution set, and give concrete steps for recovering the primal solution set from a solution of the dual problem. This provides a foundation for a theoretically complete linear support vector ordinal regression machine.

8.
Viscosity approximation algorithms for strong vector equilibrium problems and fixed point problems
We study common solutions of strong vector equilibrium problems and fixed point problems for nonexpansive mappings. First, an auxiliary problem for the strong vector equilibrium problem is introduced, and existence and uniqueness of its solution are proved under suitable conditions. Using these results, a viscosity approximation algorithm for finding a common solution of the two problems is proposed, and the iterative sequence it generates is shown to converge strongly, under appropriate conditions, to a common solution of the strong vector equilibrium problem and the fixed point problem.
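Translated into code, the viscosity approximation idea, blending a contraction f with a nonexpansive mapping T using a vanishing weight, can be illustrated on a toy scalar example (our own construction; the paper works with strong vector equilibrium problems in a far more general setting):

```python
def viscosity_iterates(f, T, x0, n_steps):
    """Viscosity approximation: x_{n+1} = a_n * f(x_n) + (1 - a_n) * T(x_n),
    with a_n = 1/(n+2), so a_n -> 0 and sum a_n = infinity."""
    x = x0
    for n in range(n_steps):
        a = 1.0 / (n + 2)
        x = a * f(x) + (1 - a) * T(x)
    return x

T = lambda x: 0.5 * x + 1.0   # nonexpansive (in fact a contraction), fixed point 2.0
f = lambda x: 0.5             # constant 'viscosity' anchor
x = viscosity_iterates(f, T, x0=0.0, n_steps=5000)
```

As the anchor weight vanishes, the iterates approach the fixed point of T.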

9.
For nonconvex optimization problems with a multi-block separable structure, a new class of stochastic Bregman alternating direction methods of multipliers is proposed. Asymptotic convergence of the algorithm is proved under a cyclic update rule, and almost-sure asymptotic convergence under a random update rule. Numerical results show that the algorithm can effectively train support vector machines with discrete structure.

10.
Many real applications can be formulated as nonlinear minimization problems with a single linear equality constraint and box constraints. We are interested in solving problems where the number of variables is so huge that basic operations, such as the evaluation of the objective function or the updating of its gradient, are very time consuming. Thus, for the considered class of problems (including dense quadratic programs), traditional optimization methods cannot be applied directly. In this paper, we define a decomposition algorithm model which employs, at each iteration, a descent search direction selected among a suitable set of sparse feasible directions. The algorithm is characterized by an acceptance rule for the updated point which, on the one hand, allows the variables to be modified to be chosen with a certain degree of freedom and, on the other hand, does not require the exact solution of any subproblem. The global convergence of the algorithm model is proved by assuming that the objective function is continuously differentiable and that the points of the level set have at least one component strictly between the lower and upper bounds. Numerical results on large-scale quadratic problems arising in the training of support vector machines show the effectiveness of an implemented decomposition scheme derived from the general algorithm model.
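The flavor of such decomposition schemes, updating only a small block of variables per iteration without solving any subproblem exactly, can be sketched on a box-constrained quadratic (a simplified illustration: the paper's model also maintains the linear equality constraint and uses sparse feasible directions, both omitted here):

```python
import numpy as np

def block_pg_qp(Q, c, lo, hi, block_size=2, iters=200, seed=0):
    """Projected gradient over random coordinate blocks for
    min 0.5 * x'Qx + c'x  s.t.  lo <= x <= hi  (Q symmetric PSD)."""
    rng = np.random.default_rng(seed)
    n = len(c)
    x = np.clip(np.zeros(n), lo, hi)
    L = np.linalg.norm(Q, 2)              # Lipschitz constant of the gradient
    for _ in range(iters):
        B = rng.choice(n, size=block_size, replace=False)
        g = Q[B] @ x + c[B]               # gradient restricted to the block
        x[B] = np.clip(x[B] - g / L, lo[B], hi[B])
    return x

Q = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 1.5]])
c = np.array([-1.0, -2.0, 0.5])
lo, hi = np.zeros(3), np.ones(3)
x = block_pg_qp(Q, c, lo, hi)
f = 0.5 * x @ Q @ x + c @ x
```

Each block step is a cheap projected-gradient move, yet the iterates steadily drive the objective well below its value at the starting point.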

11.
When a traditional support vector machine (SVM) is used to classify imbalanced data, the results are often unsatisfactory because the true minority-class support vector samples are too few and hard to identify. To address this problem, a hybrid-sampling imbalanced-data classification method based on support vector machines (BSMS) is proposed. The method first divides the original imbalanced data, after an initial SVM classification, into regions according to position: a support vector region (SV), a majority-class non-support-vector region (MN...

12.
In this paper, we consider linearly constrained multiobjective minimization and propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a suitable descent direction for all the objective functions, and then uses a bisection algorithm to find an optimal stepsize along this direction. We prove, under natural assumptions, that the proposed algorithm is well-defined and converges globally to Pareto critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative results of numerical experiments are reported.

13.
We introduce the notion of predicted decrease approximation (PDA) for constrained convex optimization, a flexible framework which includes as special cases known algorithms such as generalized conditional gradient, proximal gradient, greedy coordinate descent for separable constraints and working set methods for linear equality constraints with bounds. The new scheme allows the development of a unified convergence analysis for these methods. We further consider a partially strongly convex nonsmooth model and show that dual application of PDA-based methods yields new sublinear convergence rate estimates in terms of both primal and dual objectives. As an example of an application, we provide an explicit working set selection rule for SMO-type methods for training the support vector machine with an improved primal convergence analysis.

14.
We present a semi-supervised support vector machine classification algorithm based on overall risk minimization (ORM). By incorporating a working set during training, the algorithm improves the generalization ability of the standard SVM on data sets for which the training set provides insufficient information, and it can effectively handle large amounts of unlabeled data. The concave semi-supervised SVM algorithm is then applied to a comprehensive evaluation of county-level sustainable development capability. An empirical analysis of 15 counties in Handan demonstrates the algorithm's feasibility and effectiveness.

15.
In this paper we propose some improvements to a recent decomposition technique for the large quadratic program arising in training support vector machines. Like standard decomposition approaches, the technique we consider is based on the idea of optimizing, at each iteration, a subset of the variables through the solution of a quadratic programming subproblem. The innovative features of this approach consist in using a very effective gradient projection method for the inner subproblems and a special rule for selecting the variables to be optimized at each step. These features yield promising performance by decomposing the problem into a few large subproblems instead of many small subproblems, as is usually done by other decomposition schemes. We improve this technique by introducing a new inner solver and a simple strategy for reducing the computational cost of each iteration. We evaluate the effectiveness of these improvements by solving large-scale benchmark problems and by comparison with a widely used decomposition package.
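Gradient projection inner solvers of this kind rely on a fast projection onto the feasible set of the SVM subproblem, a box intersected with one hyperplane. A bisection-based projection, in the spirit of Dai-Fletcher-type secant/bisection methods, can be sketched as follows (this simplified version and its names are our own):

```python
import numpy as np

def project_box_hyperplane(z, a, r, lo, hi, tol=1e-10):
    """Project z onto {x : lo <= x <= hi, a'x = r} by bisection on the
    multiplier of the equality constraint. Assumes the set is nonempty."""
    def x_of(lmb):
        return np.clip(z + lmb * a, lo, hi)

    def phi(lmb):                  # a'x(lmb) - r is nondecreasing in lmb
        return a @ x_of(lmb) - r

    lmb_lo, lmb_hi = -1.0, 1.0
    while phi(lmb_lo) > 0:         # bracket the root from below
        lmb_lo *= 2
    while phi(lmb_hi) < 0:         # bracket the root from above
        lmb_hi *= 2
    while lmb_hi - lmb_lo > tol:
        mid = 0.5 * (lmb_lo + lmb_hi)
        if phi(mid) < 0:
            lmb_lo = mid
        else:
            lmb_hi = mid
    return x_of(0.5 * (lmb_lo + lmb_hi))

# SVM-flavored example: a plays the role of the labels, box [0, C] with C = 1
z = np.array([0.5, 1.2, -0.3, 0.7])
a = np.array([1.0, 1.0, -1.0, -1.0])
lo, hi = np.zeros(4), np.ones(4)
x = project_box_hyperplane(z, a, 0.0, lo, hi)
```

Because phi is monotone in the multiplier, bisection converges unconditionally, which is why this projection is cheap enough to sit inside every inner iteration.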

16.
We present a unified approach to establishing the existence of global minima of a (non)convex constrained optimization problem. Our results unify and generalize previous existence results for convex and nonconvex programs, including the Frank-Wolfe theorem, and for (quasi)convex quadratically constrained quadratic programs and convex polynomial programs. For example, instead of requiring the objective/constraint functions to be constant along certain recession directions, we only require them to linearly recede along these directions. Instead of requiring the objective/constraint functions to be convex polynomials, we only require the objective function to be a (quasi)convex polynomial over a polyhedral set and the constraint functions to be convex polynomials or the composition of coercive functions with linear mappings. We thank Professor Dimitri Bertsekas for his comments and support in the writing of this paper.

17.
We consider the problem of multiple fitting of linearly parametrized curves, which arises in many computer vision problems such as road scene analysis. Data extracted from images usually contain non-Gaussian noise and outliers, which makes classical estimation methods ineffective. In this paper, we first introduce a family of robust probability density functions which appears to be well-suited to many real-world problems. Such noise models are also suitable for defining continuation heuristics to escape shallow local minima, and their robustness is characterized in terms of the breakdown point. Second, the usual Iterative Reweighted Least Squares (IRLS) robust estimator is extended to the problem of robustly estimating sets of linearly parametrized curves. The resulting non-convex optimization problem is tackled within a Lagrangian approach, leading to the so-called Simultaneous Robust Multiple Fitting (SRMF) algorithm, whose global convergence to a local minimum is proved using results from constrained optimization theory.
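The IRLS building block extended by the paper can be illustrated for a single robust line fit with Huber weights (a minimal sketch; the SRMF algorithm fits several curves simultaneously and uses its own robust noise models):

```python
import numpy as np

def irls_line(x, y, delta=1.0, iters=50):
    """Fit y ~ a*x + b by IRLS: repeatedly solve a weighted least-squares
    problem whose Huber weights downweight large residuals."""
    A = np.column_stack([x, np.ones_like(x)])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]          # ordinary LS start
    for _ in range(iters):
        r = y - A @ theta
        absr = np.maximum(np.abs(r), 1e-12)               # avoid divide-by-zero
        w = np.where(absr <= delta, 1.0, delta / absr)    # Huber weights
        W = A * w[:, None]
        theta = np.linalg.solve(A.T @ W, A.T @ (w * y))   # weighted normal eqs
    return theta                                           # (slope, intercept)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(40)
y[::10] += 25.0            # inject gross outliers
slope, intercept = irls_line(x, y)
```

Despite four gross outliers, the reweighted fit stays close to the true line, whereas an ordinary least-squares fit would be pulled upward.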

18.
We study the scheduling problem with a common due date on two parallel identical machines and the total early work criterion. The problem is known to be NP-hard. We prove a few dominance properties of optimal solutions of this problem; these properties were suggested by the results of auxiliary computational experiments. Tests were performed with a dynamic programming algorithm and list algorithms. We then propose a polynomial time approximation scheme based on structuring the problem input. Moreover, we discuss the relationship between the early work criterion and the related late work criterion, comparing the computational complexity and approximability of scheduling problems with both objective functions.
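For two machines and a common due date d, the early work of a machine running its jobs back to back from time zero is min(load, d), so maximizing total early work reduces to scanning the achievable loads of one machine, a subset-sum computation. A small sketch (our own illustration using a reachable-sums set; a pseudo-polynomial DP table over integer loads is analogous):

```python
def max_early_work(p, d):
    """Two parallel machines, common due date d: maximize total early work.
    For a partition with loads L1, L2 the early work is min(L1,d) + min(L2,d),
    so it suffices to enumerate the reachable loads of machine 1."""
    total = sum(p)
    reachable = {0}
    for pj in p:                               # subset-sum over job times
        reachable |= {s + pj for s in reachable}
    return max(min(s, d) + min(total - s, d) for s in reachable)
```

For jobs (4, 3, 2) and d = 5 the balanced split (4+... vs ...) finishes all work early, giving early work 9; tightening the due date to d = 3 caps each machine at 3.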

19.
In this paper, we consider convergence properties of a class of penalization methods for a general vector optimization problem with cone constraints in infinite dimensional spaces. Under certain assumptions, we show that any efficient point of the cone constrained vector optimization problem can be approached by a sequence of efficient points of the penalty problems. We also show, on the other hand, that any limit point of a sequence of approximate efficient solutions to the penalty problems is a weakly efficient solution of the original cone constrained vector optimization problem. Finally, when the constrained space is of finite dimension, we show that any limit point of a sequence of stationary points of the penalty problems is a KKT stationary point of the original cone constrained vector optimization problem if the Mangasarian–Fromovitz constraint qualification holds at the limit point. This work is supported by the Postdoctoral Fellowship of Hong Kong Polytechnic University.
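The convergence pattern studied here can be seen in the simplest scalar case: a quadratic penalty for min x² subject to x = 1 (a toy illustration of penalization, far simpler than the paper's cone-constrained vector setting). The penalized problem min x² + μ(x−1)² has the closed-form minimizer x = μ/(1+μ), obtained by setting 2x + 2μ(x−1) = 0, which tends to the constrained solution x* = 1 as μ → ∞:

```python
def penalty_solve(mu_values):
    """Minimizers of the quadratic-penalty problems x^2 + mu*(x-1)^2:
    solving 2x + 2*mu*(x-1) = 0 gives x = mu / (1 + mu)."""
    return [mu / (1.0 + mu) for mu in mu_values]

xs = penalty_solve([1, 10, 100, 1000])   # monotonically approaches 1.0
```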

20.
Support vector machines (SVMs) are becoming increasingly popular for the prediction of a binary dependent variable, and they perform very well with respect to competing techniques. Often, the solution of an SVM is obtained by switching to the dual. In this paper, we stick to the primal support vector machine problem, study its effective aspects, and consider several convex loss functions: the standard absolute hinge error of the SVM as well as the quadratic hinge and Huber hinge errors. We present an iterative majorization algorithm that minimizes each of these adaptations. In addition, we show that many of the features of an SVM are also obtained by an optimal scaling approach to regression. We illustrate this with an example from the literature and compare different methods on several empirical data sets.
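The three loss variants named above are simple to write down as functions of the margin z = y·f(x). A sketch (the exact Huber-hinge parametrization varies by author; the smoothed form below is one common choice, not necessarily the paper's):

```python
import numpy as np

def absolute_hinge(z):
    """Standard SVM hinge: max(0, 1 - z)."""
    return np.maximum(0.0, 1.0 - z)

def quadratic_hinge(z):
    """Squared hinge: max(0, 1 - z)^2 (differentiable, heavier penalty)."""
    return np.maximum(0.0, 1.0 - z) ** 2

def huber_hinge(z):
    """One common smoothed ('Huber') hinge: linear for z <= 0,
    quadratic for 0 < z < 1, zero for z >= 1 (continuous at both joins)."""
    z = np.asarray(z, dtype=float)
    return np.where(z >= 1, 0.0,
                    np.where(z <= 0, 0.5 - z, 0.5 * (1.0 - z) ** 2))
```

All three vanish on correctly classified points beyond the margin (z ≥ 1); they differ only in how violations are penalized, which is what the majorization algorithm exploits.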
