Similar Documents
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
2.
《Optimization》2012,61(7):1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is given in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problems as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We obtain the solution of the converted problem by a fixed-point iterative algorithm and establish the convergence of the algorithm. Finally, we present some preliminary results from our computational experiments with the method.
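As context for the set containment constraint mentioned above, the sketch below records how, in Mangasarian-style knowledge-based SVMs, a polyhedral knowledge set asserted to lie on the positive side of the separating hyperplane is converted into linear conditions via a Farkas-type argument. It is a generic illustration under assumed notation (B, b, u, w, γ), not necessarily the exact formulation of this article.

```latex
% Knowledge set {x : Bx <= b} assumed to lie in the positive half-space
% of the separating hyperplane w'x = gamma (set containment constraint):
\{x : Bx \le b\} \subseteq \{x : w^{\top}x \ge \gamma + 1\}.
% By a Farkas-type lemma (assuming the knowledge set is nonempty), this is
% equivalent to the existence of a multiplier vector u:
\exists\, u \ge 0:\qquad B^{\top}u + w = 0, \qquad b^{\top}u + \gamma + 1 \le 0.
% Under interval uncertainty in (B, b), the robust counterpart asks these
% conditions to hold for all admissible realizations of the data.
```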

3.
Back analysis is commonly used to identify geomechanical parameters from monitored displacements. Conventional back analysis methods cannot effectively capture the non-linear relationship between displacements and mechanical parameters. The new intelligent displacement back analysis method proposed in this paper combines support vector machines (SVMs), particle swarm optimization (PSO), and numerical analysis techniques. The non-linear relationship is represented efficiently by an SVM. Numerical analysis is used to create training and testing samples for fitting the SVMs. A global search over the fitted SVMs by particle swarm optimization then identifies the geomechanical parameters effectively.
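A minimal sketch of this SVM-plus-PSO back-analysis idea is given below: an SVR surrogate is fitted to samples produced by a (here purely illustrative) numerical model, and a simple PSO loop searches for the parameters whose predicted displacement matches a monitored value. The model function, parameter names, bounds, and PSO settings are all assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of SVM + PSO displacement back analysis (not the authors' code).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def numerical_model(params):
    """Placeholder for an FEM/FDM run: maps (E, K0) to a displacement."""
    E, K0 = params
    return 50.0 / E + 0.3 * K0          # purely illustrative response

# 1. Build training samples (parameter sets -> displacements) from the model.
bounds = np.array([[1.0, 10.0], [0.5, 2.0]])            # E in [1,10], K0 in [0.5,2]
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(200, 2))
y = np.array([numerical_model(p) for p in X])

# 2. Fit the SVR surrogate of the non-linear parameter-displacement relation.
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)

# 3. PSO search for parameters reproducing the monitored displacement.
monitored = 12.0                                         # illustrative measurement

def misfit(p):
    return (surrogate.predict(p.reshape(1, -1))[0] - monitored) ** 2

n_particles, n_iter = 30, 100
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("identified parameters:", gbest)
```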

4.
In this paper we develop a general primal-dual nonlinear rescaling method with dynamic scaling parameter update (PDNRD) for convex optimization. We prove global convergence and establish a 1.5-Q-superlinear rate of convergence under the standard second-order optimality conditions. The PDNRD method was implemented numerically and tested on a number of nonlinear problems from the COPS and CUTE test sets. We present numerical results, which strongly corroborate the theory. The research of the authors was supported by NSF Grant CCF-0324999.

5.
Using duality, we reformulate the asymmetric variational inequality (VI) problem over a conic region as an optimization problem. We give sufficient conditions for the convexity of this reformulation. We thereby identify a class of VIs, including monotone affine VIs over polyhedra, that may be solved by commercial optimization solvers.
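For reference, the block below records the VI over a cone together with one classical way of recasting a VI as an optimization problem via a gap function. This is a generic illustration of the "VI as optimization" idea, not the specific dual reformulation constructed in this paper.

```latex
% Variational inequality over a closed convex cone (conic region) C:
\text{find } x^{*} \in C \ \text{ such that } \ \langle F(x^{*}),\, x - x^{*} \rangle \ge 0
\quad \text{for all } x \in C.
% One classical optimization reformulation uses the gap function
g(x) \;=\; \sup_{y \in C} \,\langle F(x),\, x - y \rangle ,
% which satisfies g(x) \ge 0 for x \in C, and x^{*} solves the VI iff
% x^{*} \in C and g(x^{*}) = 0, i.e. x^{*} is a global minimizer of
\min_{x \in C} \; g(x).
```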

6.
Demand forecasts play a crucial role in supply chain management: the forecast of future demand for a product is the basis for the corresponding replenishment system. Demand series often have small sample sizes and exhibit seasonality, nonlinearity, randomness and fuzziness, and existing support vector kernels do not approximate the random curve of the sales time series well in the space of square-integrable functions. In this paper, we present a hybrid intelligent system combining a wavelet-kernel support vector machine with particle swarm optimization for demand forecasting. Results of an application to a car sales series show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible. A comparison between the proposed method and alternatives is also given, which shows that, for the discussed example, the method outperforms the hybrid PSOv-SVM and other traditional methods.
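The sketch below shows one way a wavelet kernel can be plugged into an SVR for demand forecasting: a Morlet-style wavelet kernel is passed to scikit-learn's SVR as a callable Gram-matrix function. The kernel form, dilation parameter a, lag construction, and synthetic demand data are assumptions for illustration; the paper's PSOWv-SVM additionally tunes the parameters by PSO, which is omitted here.

```python
# Hedged sketch of a wavelet-kernel SVR for demand forecasting (not the paper's code).
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=1.0):
    """K(x, y) = prod_i cos(1.75*(x_i - y_i)/a) * exp(-((x_i - y_i)/a)**2 / 2)."""
    D = (X[:, None, :] - Y[None, :, :]) / a          # pairwise differences
    return np.prod(np.cos(1.75 * D) * np.exp(-D ** 2 / 2.0), axis=2)

# Toy seasonal demand series; in practice this would be, e.g., monthly car sales.
t = np.arange(60, dtype=float)
demand = 100 + 10 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(0, 2, 60)

# Turn the series into a supervised problem: predict demand from the last 12 lags.
lags = 12
X = np.array([demand[i:i + lags] for i in range(len(demand) - lags)])
y = demand[lags:]

model = SVR(kernel=lambda A, B: wavelet_kernel(A, B, a=2.0), C=10.0, epsilon=0.1)
model.fit(X[:-6], y[:-6])                            # hold out the last 6 points
print("forecast:", model.predict(X[-6:]))
print("actual:  ", y[-6:])
```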

7.
The aim of this paper is to point out some sufficient constraint qualification conditions ensuring the boundedness of a set of Lagrange multipliers for vector optimization problems in infinite dimensions. In some (smooth) cases these conditions turn out to be necessary for the existence of multipliers as well.

8.
The support vector machine (SVM) is a popular tool for machine learning tasks. It has been applied successfully in many fields, but parameter optimization for SVMs remains an open research issue. In this paper, to tune the parameters of the SVM, one form of inter-cluster distance in the feature space is calculated for all the SVM classifiers of a multi-class problem. The inter-cluster distance in the feature space indicates how well the classes are separated: a larger inter-cluster distance implies a pair of more separable classes. For each classifier, the optimal kernel parameter, which yields the largest inter-cluster distance, is found. A new continuous search interval of the kernel parameter, covering the optimal kernel parameter of each class pair, is then determined. A self-adaptive differential evolution algorithm is used to search for the optimal parameter combination within the continuous intervals of the kernel parameter and the penalty parameter. Finally, the proposed method is applied to several real-world datasets as well as to fault diagnosis for rolling element bearings. The results show that it is both effective and computationally efficient for parameter optimization of multi-class SVMs.
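One common form of inter-cluster distance in the kernel feature space is the squared distance between the two class centers, which can be evaluated with the kernel trick alone. The sketch below sweeps the RBF parameter gamma of one class pair and picks the value giving the largest distance; the specific distance definition and search ranges used in the paper may differ, and the data are synthetic.

```python
# Hedged sketch: class-center distance in the RBF feature space via the kernel trick.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def feature_space_center_distance(A, B, gamma):
    """||mean(phi(A)) - mean(phi(B))||^2 computed from kernel values only."""
    return (rbf_kernel(A, A, gamma=gamma).mean()
            - 2.0 * rbf_kernel(A, B, gamma=gamma).mean()
            + rbf_kernel(B, B, gamma=gamma).mean())

rng = np.random.default_rng(0)
A = rng.normal(loc=0.0, scale=1.0, size=(50, 4))     # samples of class 1
B = rng.normal(loc=1.5, scale=1.0, size=(60, 4))     # samples of class 2

gammas = np.logspace(-3, 2, 30)
dists = [feature_space_center_distance(A, B, g) for g in gammas]
best = gammas[int(np.argmax(dists))]
print("gamma with largest inter-cluster distance:", best)
```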

9.
We improve the twin support vector machine (TWSVM) into a novel nonparallel-hyperplane classifier, termed ITSVM (improved twin support vector machine), for binary classification. By introducing different Lagrangian functions for the primal problems in the TWSVM, we obtain an improved dual formulation of the TWSVM; the resulting ITSVM algorithm overcomes the common drawbacks of TWSVMs and inherits the essence of the standard SVM. First, ITSVM does not need to compute the large inverse matrices before training that are unavoidable for TWSVMs. Second, unlike TWSVMs, the kernel trick can be applied directly to ITSVM in the nonlinear case, so nonlinear ITSVM is theoretically superior to nonlinear TWSVM. Third, ITSVM can be solved efficiently by the successive overrelaxation (SOR) technique or the sequential minimal optimization (SMO) method, which makes it more suitable for large-scale problems. We also prove that the standard SVM is a special case of ITSVM. Experimental results show the efficiency of our method in both computation time and classification accuracy.

10.
《Applied Mathematical Modelling》2014,38(11-12):2800-2818
Electrical discharge machining (EDM) is an inherently stochastic process, and predicting its output with reasonable accuracy is rather difficult. Modern learning-based methodologies, capable of capturing the underlying, unseen effects of control factors on responses, appear to be effective in this regard. In the present work, the support vector machine (SVM), a supervised learning method, is applied to develop a model of the EDM process. A Gaussian radial basis function and the ε-insensitive loss function are used as the kernel function and loss function, respectively. Separate models of material removal rate (MRR) and average surface roughness (Ra) are developed by minimizing the mean absolute percentage error (MAPE) of the training data obtained for different combinations of SVM parameters. Particle swarm optimization (PSO) is employed to optimize the SVM parameter combinations. The models thus developed are then tested on disjoint test data sets. Optimum parameter settings for maximum MRR and minimum Ra are further investigated by applying PSO to the developed models.
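A minimal sketch of the MAPE objective that PSO would minimize when tuning the SVR hyper-parameters (C, gamma, epsilon) is shown below; a PSO loop such as the one sketched under item 3 above can drive this objective. The synthetic EDM-like inputs, the response function, and the parameter parameterization in log space are assumptions for illustration, not the paper's experimental data.

```python
# Hedged sketch of the MAPE objective for PSO-based SVR parameter tuning.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.uniform([2, 50, 0.5], [10, 300, 3.0], size=(80, 3))       # illustrative process settings
mrr = 0.05 * X[:, 0] * np.log(X[:, 1]) + rng.normal(0, 0.05, 80)  # synthetic MRR response

def mape_objective(params):
    """MAPE (%) of an RBF-kernel SVR for a given (log10 C, log10 gamma, epsilon)."""
    C, gamma, eps = 10 ** params[0], 10 ** params[1], params[2]
    model = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=eps)
    pred = cross_val_predict(model, X, mrr, cv=5)
    return float(np.mean(np.abs((mrr - pred) / mrr))) * 100.0

print("MAPE at a trial setting:", mape_objective(np.array([1.0, -2.0, 0.05])))
```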

11.
Sensitivity analysis in convex vector optimization
We consider a parametrized convex vector optimization problem with a parameter vector u. Let Y(u) be the objective space image of the parametrized feasible region. The perturbation map W(u) is defined as the set of all minimal points of the set Y(u) with respect to an ordering cone in the objective space. The purpose of this paper is to investigate the relationship between the contingent derivative DW of W and the contingent derivative DY of Y. Sufficient conditions for Min DW = Min DY and DW = W min DY are obtained, respectively. Therefore, quantitative information on the behavior of the perturbation map is provided. The author would like to thank the anonymous referees for their helpful comments which improved the quality of this paper. The author would also like to thank Professor P. L. Yu for his encouragement.
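For context, the block below states the standard definitions of the perturbation map and the contingent derivative used in set-valued sensitivity analysis; the notation (ordering cone C, graph point (u_0, y_0)) is the usual one and may differ slightly from the paper's.

```latex
% Perturbation map: minimal points of the image set w.r.t. the ordering cone C
W(u) \;=\; \operatorname{Min}_{C} Y(u)
      \;=\; \{\, y \in Y(u) : \big(y - (C \setminus \{0\})\big) \cap Y(u) = \emptyset \,\}.
% Contingent derivative of the set-valued map W at a point (u_0, y_0) of its graph:
DW(u_0, y_0)(d) \;=\; \Big\{\, y : \exists\, t_n \downarrow 0,\ (d_n, y_n) \to (d, y)
\ \text{with}\ y_0 + t_n y_n \in W(u_0 + t_n d_n) \,\Big\}.
```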

12.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so large that traditional optimization methods cannot be applied directly. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of support vector machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears well suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.
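The block below sketches the generic shape of the proximal subproblem such decomposition methods solve over a working set at each iteration; the working-set selection rule and parameter choices that distinguish the paper's algorithm are not reproduced here.

```latex
% Generic proximal decomposition subproblem for  min f(x)  s.t.  a^{T}x = b,  l <= x <= u:
% the variables in the working set W are updated, the complement N is kept fixed.
x_{W}^{k+1} \in \arg\min_{x_{W}} \; f\big(x_{W}, x_{N}^{k}\big)
              \;+\; \frac{\tau}{2}\,\big\|x_{W} - x_{W}^{k}\big\|^{2}
\quad \text{s.t.} \quad a_{W}^{\top} x_{W} = b - a_{N}^{\top} x_{N}^{k},
\qquad l_{W} \le x_{W} \le u_{W},
% with proximal parameter \tau > 0, which is what allows global convergence
% even when f is nonconvex.
```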

13.
This paper describes the relationship between support vector regression (SVR) and rough (or interval) patterns. SVR is the prediction component of the support vector techniques. Rough patterns are based on the notion of rough values, which consist of upper and lower bounds and are used to represent a range of variable values effectively. Prediction of rough values in a variety of different forms, within the context of interval algebra and fuzzy theory, is attracting research interest. An extension of SVR, called rough support vector regression (RSVR), is proposed to improve the modeling of rough patterns. In particular, it is argued that the upper and lower bounds should be modeled separately. The proposal is shown to be a more flexible version of the lower possibilistic regression model using ε-insensitivity. Experimental results on the Dow Jones Industrial Average demonstrate the suggested RSVR modeling technique.
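The sketch below illustrates only the basic idea of modeling the lower and upper bounds of an interval-valued series separately with two ε-insensitive SVRs; it is a simplification for illustration with synthetic low/high data, not the coupled RSVR formulation proposed in the paper.

```python
# Hedged sketch: two epsilon-insensitive SVRs for the lower and upper bounds of a rough pattern.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
mid = 100 + 0.05 * t + 2 * np.sin(t / 10)            # synthetic index level
low = mid - rng.uniform(0.5, 2.0, 200)               # daily lower bound
high = mid + rng.uniform(0.5, 2.0, 200)              # daily upper bound

X = t.reshape(-1, 1)
svr_low = SVR(kernel="rbf", C=100.0, epsilon=0.2).fit(X[:-20], low[:-20])
svr_high = SVR(kernel="rbf", C=100.0, epsilon=0.2).fit(X[:-20], high[:-20])

pred_low, pred_high = svr_low.predict(X[-20:]), svr_high.predict(X[-20:])
print("predicted intervals:", list(zip(pred_low.round(2), pred_high.round(2)))[:5])
```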

14.
In this paper, we propose a robust L1-norm non-parallel proximal support vector machine (L1-NPSVM), which aims to give robust performance for binary classification, in contrast to GEPSVM, especially on problems with outliers. The proposed L1-NPSVM has three main properties. First, unlike the traditional GEPSVM, which solves two generalized eigenvalue problems, our L1-NPSVM solves a pair of L1-norm optimization problems using a simple, justifiable iterative technique. Second, by introducing the L1 norm, our L1-NPSVM is considerably more robust to outliers than GEPSVM. Third, compared with GEPSVM, no parameters need to be regularized in our L1-NPSVM. The effectiveness of the proposed method is demonstrated by tests on a simple artificial example as well as on several UCI datasets, which show its improvements over GEPSVM.

15.
In this paper, we present a necessary and sufficient condition for the existence of the Lagrange multiplier for the general convex vector optimization problem. The condition allows the constraint cone to have an empty interior.

16.
In connection with mathematical programming in infinite-dimensional vector spaces, Zowe has studied the relationship between the Slater constraint qualification and a formally weaker qualification used by Kurcyusz. The attractive feature of the latter is that it involves only active constraints. Zowe has proved that, in barreled spaces, the two qualifications are equivalent and has asked whether the assumption of barreledness is superfluous. By studying cores and interiors of convex cones, we show that the two constraint qualifications are equivalent in a given topological vector space E iff every barrel in E is a neighborhood of the origin. Thus, when E is locally convex, the two constraint qualifications are equivalent iff E is barreled. Other questions of Zowe are also answered. This research was supported in part by the Office of Naval Research, and in part by the Sonderforschungsbereich 21, Institut für Operations Research, Bonn, Federal Republic of Germany. The author is indebted to Professor J. Zowe for some helpful comments.

17.
On programming when the positive cone has an empty interior
In this note, we present a condition which is equivalent to the existence of the Lagrange multiplier for the general convex programming problem. This condition enables one to study a hypothesis distinct from the commonly used assumption that the positive cone of the space of restrictions has a nonempty interior. Simple examples of this condition are given. We also explore the relationship of this condition to the subdifferentiability of the primal functional.

18.
This paper attempts to extend the notion of duality for convex cones by basing it on a prescribed conic ordering and a fixed bilinear mapping. This is an extension of the standard definition of dual cones, in the sense that the nonnegativity of the inner product is replaced by a pre-specified conic ordering, defined by a convex cone, and the inner product itself is replaced by a general multi-dimensional bilinear mapping. This new type of duality is termed the cone-induced duality in the paper. We further introduce the notion of cone-induced polar sets within the same framework, which can be viewed as a generalization of the cone-induced dual cones and is convenient to use in some practical applications. Properties of the extended duality, including the extended bipolar theorem, are proven. Furthermore, attention is paid to the computation and approximation of the cone-induced dual objects. We discuss, as examples, applications of the newly introduced cone-induced duality concepts in robust conic optimization and the duality theory for multi-objective conic optimization. Research supported in part by the Foundation 'Vereniging Trustfonds Erasmus Universiteit Rotterdam' in The Netherlands, and in part by Hong Kong RGC Earmarked Grants CUHK4174/03E and CUHK418406.

19.
Uniform boundedness of the output variable is a standard assumption in most theoretical analyses of regression algorithms. This standard assumption has recently been weakened to a moment hypothesis in the least squares regression (LSR) setting. Although there is a large literature on error analysis for LSR under the moment hypothesis, very little is known about the statistical properties of support vector machine regression with unbounded sampling. In this paper, we fill this gap in the literature. Without any restriction on the boundedness of the output sampling, we establish an ad hoc convergence analysis for support vector machine regression under very mild conditions.

20.