Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Back analysis is commonly used to identify geomechanical parameters from monitored displacements. Conventional back analysis methods cannot effectively recognize the non-linear relationship between displacements and mechanical parameters. The new intelligent displacement back analysis method proposed in this paper combines support vector machines (SVMs), particle swarm optimization, and numerical analysis techniques. The non-linear relationship is efficiently represented by an SVM; numerical analysis is used to create training and testing samples for the SVMs; a global optimum search over the trained SVMs by particle swarm optimization then identifies the geomechanical parameters effectively.
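A minimal sketch of the PSO search stage described above. The trained SVM surrogate is replaced here by a hypothetical stand-in function `surrogate_misfit` (a simple quadratic with a known minimum), since the paper's actual model is not available; the PSO loop itself is a standard global-best formulation, not necessarily the paper's exact variant.

```python
import random

def surrogate_misfit(params):
    # Hypothetical stand-in for the trained SVM surrogate: returns the
    # mismatch between predicted and monitored displacements.
    # True optimum placed at (2.0, -1.0) purely for illustration.
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def pso(objective, dim=2, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(surrogate_misfit)
```

In the paper's setting, `surrogate_misfit` would evaluate the SVM's predicted displacements against the monitored ones, and `best` would be the identified geomechanical parameters.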

2.
In this paper, we deal with ranking problems arising from various data mining applications where the major task is to train a rank-prediction model to assign every instance a rank. We first discuss the merits and potential disadvantages of two existing popular approaches for ranking problems: the ‘Max-Wins’ voting process based on multi-class support vector machines (SVMs) and the model based on multi-criteria decision making. We then propose a confidence voting process for ranking problems based on SVMs, which can be viewed as a combination of the SVM approach and the multi-criteria decision making model. Promising numerical experiments based on the new model are reported. The research of the last author was supported by the grant #R.PG 0048923 of NESERC, the MITACS project “New Interior Point Methods and Software for Convex Conic-Linear Optimization and Their Application to Solve VLSI Circuit Layout Problems” and the Canada Researcher Chair Program.
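A toy sketch contrasting the two voting schemes discussed above. The pairwise decision values are hard-coded here for illustration, not taken from the paper's experiments; in practice each value would be the signed output of a trained pairwise SVM.

```python
# Toy pairwise decision values f[(i, j)]: positive favors class i, negative class j.
# Values are made up for illustration; real ones would come from pairwise SVMs.
decisions = {(0, 1): 0.3, (0, 2): -1.2, (1, 2): 2.1}
classes = [0, 1, 2]

def max_wins(decisions, classes):
    # 'Max-Wins': each pairwise classifier casts one hard vote.
    votes = {c: 0 for c in classes}
    for (i, j), f in decisions.items():
        votes[i if f > 0 else j] += 1
    return max(classes, key=lambda c: votes[c])

def confidence_vote(decisions, classes):
    # Confidence voting: accumulate the signed margins instead of hard votes.
    score = {c: 0.0 for c in classes}
    for (i, j), f in decisions.items():
        score[i] += f   # margin in favor of class i
        score[j] -= f   # and against class j
    return max(classes, key=lambda c: score[c])
```

On these made-up values Max-Wins yields a three-way tie (one vote each, resolved arbitrarily by list order), while the confidence vote picks class 1 on total margin, illustrating why carrying confidence information forward can help.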

3.
Support vector machines (SVMs) have become a popular supervised learning method in recent years and are widely applied in statistical classification and regression analysis. Using an SVM model, we examine how a series of influencing factors affect expressway pavement quality indicators, and offer suggestions for improving expressway pavement quality.

4.
Support vector machines (SVMs) have attracted much attention in theoretical and in applied statistics. The main topics of recent interest are consistency, learning rates and robustness. We address the open problem of whether SVMs are qualitatively robust. Our results show that SVMs are qualitatively robust for any fixed regularization parameter λ. However, under extremely mild conditions on the SVM, it turns out that SVMs are no longer qualitatively robust for any null sequence λn, the classical type of sequence needed to obtain universal consistency. This lack of qualitative robustness is of a rather theoretical nature because we show that, in any case, SVMs fulfill a finite sample qualitative robustness property. For a fixed regularization parameter, SVMs can be represented by a functional on the set of all probability measures. Qualitative robustness is proven by showing that this functional is continuous with respect to the topology generated by weak convergence of probability measures. Combined with the existence and uniqueness of SVMs, our results show that SVMs are the solutions of a well-posed mathematical problem in Hadamard’s sense.

5.
Face recognition based only on the visual spectrum is neither accurate nor robust enough to be used in uncontrolled environments. Recently, infrared (IR) imagery of the human face has been considered a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has its own limitations. In order to fuse information from the two modalities and achieve better results, we propose a new fusion recognition scheme based on nonlinear decision fusion, using the fuzzy integral to fuse the objective evidence supplied by each modality. The scheme also employs independent component analysis (ICA) for feature extraction and support vector machines (SVMs) to produce classification evidence. Recognition rate is used to evaluate the proposed scheme. Experimental results show the scheme improves recognition performance substantially.

6.
Support vector machines (SVMs) belong to the class of modern statistical machine learning techniques and can be described as M-estimators with a Hilbert norm regularization term for functions. SVMs are consistent and robust for classification and regression purposes if based on a Lipschitz continuous loss and a bounded continuous kernel with a dense reproducing kernel Hilbert space. For regression, one of the conditions used is that the output variable Y has a finite first absolute moment. This assumption, however, excludes heavy-tailed distributions. Recently, the applicability of SVMs was enlarged to these distributions by considering shifted loss functions. In this review paper, we briefly describe the approach of SVMs based on shifted loss functions and list some properties of such SVMs. Then, we prove that SVMs based on a bounded continuous kernel and on a convex and Lipschitz continuous, but not necessarily differentiable, shifted loss function have a bounded Bouligand influence function for all distributions, even for heavy-tailed distributions including extreme value distributions and Cauchy distributions. SVMs are thus robust in this sense. Our result covers the important loss functions ε-insensitive for regression and pinball for quantile regression, which were not covered by earlier results on the influence function. We demonstrate the usefulness of SVMs even for heavy-tailed distributions by applying SVMs to a simulated data set with Cauchy errors and to a data set of large fire insurance claims of Copenhagen Re.

7.
Support vector machines (SVMs) have been used successfully to deal with nonlinear regression and time series problems. However, SVMs have rarely been applied to forecasting reliability. This investigation elucidates the feasibility of SVMs to forecast reliability. In addition, genetic algorithms (GAs) are applied to select the parameters of an SVM model. Numerical examples taken from the previous literature are used to demonstrate the performance of reliability forecasting. The experimental results reveal that the SVM model with genetic algorithms (SVMG) results in better predictions than the other methods. Hence, the proposed model is a proper alternative for forecasting system reliability.
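A minimal sketch of the GA parameter-selection step. The cross-validated SVM error is replaced by a hypothetical stand-in `cv_error` with a known minimum, since the paper's data and model are unavailable; a real run would train and score an SVM at each candidate (C, gamma). The GA here uses simple truncation selection, blend crossover, and Gaussian mutation, which may differ from the paper's operators.

```python
import random

def cv_error(log_C, log_gamma):
    # Hypothetical stand-in for cross-validated forecasting error of an SVM
    # with parameters (C, gamma) on a log scale. Minimum placed at
    # log_C = 2, log_gamma = -3 purely for illustration.
    return (log_C - 2.0) ** 2 + (log_gamma + 3.0) ** 2

def ga_select(pop_size=20, n_gens=60, bounds=(-5.0, 5.0), seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=lambda ind: cv_error(*ind))
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                          # blend crossover
            child = tuple(w * x + (1 - w) * y for x, y in zip(a, b))
            child = tuple(min(hi, max(lo, g + rng.gauss(0, 0.1)))  # mutation
                          for g in child)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: cv_error(*ind))

best_log_C, best_log_gamma = ga_select()
```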

8.
Support vector machines (SVMs) have been successfully used to identify individuals’ preferences in conjoint analysis. One of the challenges of using SVMs in this context is to properly control for preference heterogeneity among individuals to construct robust partworths. In this work, we present a new technique that obtains all individual utility functions simultaneously in a single optimization problem based on three objectives: complexity reduction, model fit, and heterogeneity control. While complexity reduction and model fit are dealt with using SVMs, heterogeneity is controlled by shrinking the individual-level partworths toward a population mean. The proposed approach is further extended to kernel-based machines, conferring flexibility to the model by allowing nonlinear utility functions. Experiments on simulated and real-world datasets show that the proposed approach in its linear form outperforms existing methods for choice-based conjoint analysis.

9.
In most papers establishing consistency for learning algorithms it is assumed that the observations used for training are realizations of an i.i.d. process. In this paper we go far beyond this classical framework by showing that support vector machines (SVMs) only require that the data-generating process satisfies a certain law of large numbers. We then consider the learnability of SVMs for α-mixing (not necessarily stationary) processes for both classification and regression, where for the latter we explicitly allow unbounded noise.

10.
Support vector machines (SVMs) are becoming increasingly popular for the prediction of a binary dependent variable, and perform very well with respect to competing techniques. Often, the solution of an SVM is obtained by switching to the dual. In this paper, we stick to the primal support vector machine problem, study its effective aspects, and consider a variety of convex loss functions: the standard absolute hinge error of the SVM, as well as the quadratic hinge and the Huber hinge errors. We present an iterative majorization algorithm that minimizes each of these adaptations. In addition, we show that many of the features of an SVM are also obtained by an optimal scaling approach to regression. We illustrate this with an example from the literature and compare different methods on several empirical data sets.
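The three hinge variants mentioned above can be written down directly as functions of the margin y·f(x). This is one common parameterization of the Huber hinge (quadratic within distance k of the margin, linear beyond it); the paper's constants may differ.

```python
def absolute_hinge(margin):
    # Standard SVM hinge loss: max(0, 1 - y*f(x)).
    return max(0.0, 1.0 - margin)

def quadratic_hinge(margin):
    # Squared hinge: smooth everywhere, penalizes violations quadratically.
    return max(0.0, 1.0 - margin) ** 2

def huber_hinge(margin, k=1.0):
    # Huber hinge: zero beyond the margin, quadratic for small violations,
    # linear for large ones -- differentiable yet robust to outliers.
    if margin > 1.0:
        return 0.0
    if margin >= 1.0 - k:
        return (1.0 - margin) ** 2 / (2.0 * k)
    return 1.0 - margin - k / 2.0
```

At margin = 1 - k the quadratic and linear branches both equal k/2, so the loss is continuous; this smoothness is what makes the majorization updates well behaved.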

11.
Sequential minimal optimization (SMO) is a simple and efficient decomposition algorithm for solving support vector machines (SVMs). In this paper, an improved working set selection and a simplified minimization step are proposed for the SMO-type decomposition method, reducing the learning time for the SVM and increasing the efficiency of SMO. Since the working set is selected directly according to the Karush–Kuhn–Tucker (KKT) conditions, the minimization step of the subproblem is simplified; accordingly, the learning time for the SVM is reduced and convergence is accelerated. Following Keerthi’s method, the convergence of the proposed algorithm is analyzed. It is proven that the improved algorithm obtains, within a finite number of iterations, a solution satisfying the KKT conditions.
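A sketch of KKT-based working-set selection in an SMO-type solver, using the classical maximal-violating-pair rule (not necessarily the paper's exact rule). The tiny kernel matrix, labels, and dual variables are made up for illustration.

```python
# Dual SVM state for a toy 4-point problem (all values made up for illustration).
y = [1, 1, -1, -1]
alpha = [0.0, 0.4, 0.4, 0.0]
C = 1.0
K = [[1.0, 0.8, 0.1, 0.0],   # kernel matrix K[i][j] = k(x_i, x_j)
     [0.8, 1.0, 0.2, 0.1],
     [0.1, 0.2, 1.0, 0.7],
     [0.0, 0.1, 0.7, 1.0]]
n = len(y)

# Gradient of the dual objective: grad_i = sum_j alpha_j y_i y_j K_ij - 1.
grad = [sum(alpha[j] * y[i] * y[j] * K[i][j] for j in range(n)) - 1.0
        for i in range(n)]

def select_working_pair():
    # Maximal-violating-pair rule: i maximizes -y_t*grad_t over the "up" set,
    # j minimizes it over the "down" set; (i, j) most violates the KKT
    # conditions and is optimized next in the two-variable subproblem.
    up = [t for t in range(n)
          if (y[t] == 1 and alpha[t] < C) or (y[t] == -1 and alpha[t] > 0)]
    down = [t for t in range(n)
            if (y[t] == 1 and alpha[t] > 0) or (y[t] == -1 and alpha[t] < C)]
    i = max(up, key=lambda t: -y[t] * grad[t])
    j = min(down, key=lambda t: -y[t] * grad[t])
    return i, j

i, j = select_working_pair()
```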

12.
Credit scoring is a risk evaluation task regarded as a critical decision for financial institutions, since wrong decisions may result in huge losses. Classification models are one of the most widely used groups of data mining approaches, greatly helping decision makers and managers reduce the credit risk of granting credit to customers, instead of relying on intuitive experience or portfolio management. Accuracy is one of the most important criteria in choosing a credit-scoring model; hence, research aimed at improving the effectiveness of credit scoring models has never stopped. In this article, a hybrid binary classification model, FMLP, is proposed for credit scoring, based on the basic concepts of fuzzy logic and artificial neural networks (ANNs). In the proposed model, fuzzy numbers are used instead of the crisp weights and biases of traditional multilayer perceptrons (MLPs), in order to better model the uncertainties and complexities of financial data sets. Empirical results on three well-known benchmark credit data sets indicate that the proposed hybrid model outperforms its components as well as other classification models such as support vector machines (SVMs), K-nearest neighbor (KNN), quadratic discriminant analysis (QDA), and linear discriminant analysis (LDA). It can therefore be concluded that the proposed model is an appropriate alternative tool for financial binary classification problems, especially under high uncertainty. © 2013 Wiley Periodicals, Inc. Complexity 18: 46–57, 2013

13.
We extend the conventional Analytic Hierarchy Process (AHP) to a Euclidean vector space and develop formulations for aggregating the alternative preferences with the criteria preferences. Relative priorities obtained from such a formulation are almost identical to the ones obtained using conventional AHP. Each decision is represented by a preference vector indicating the orientation of the decision maker's mind in the decision space spanned by the decision alternatives. This adds a geometric meaning to the decision making process. We utilise a measure of similarity between any two decision makers and apply it to analysing decisions in a homogeneous group. We propose an aggregation scheme for calculating the group preference from individual preferences using a simple vector addition procedure that satisfies the Pareto optimality condition. The results agree very well with those of conventional AHP.
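A minimal sketch of the conventional AHP step the paper builds on: extracting relative priorities from a pairwise comparison matrix as its principal eigenvector, computed here by power iteration. The 3x3 matrix is made up for illustration and happens to be perfectly consistent.

```python
def ahp_priorities(M, n_iters=100):
    # Principal-eigenvector priorities of a pairwise comparison matrix,
    # computed by power iteration and normalized to sum to 1.
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(n_iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Consistent example: alternative A is preferred 2x over B and 4x over C,
# so M[i][j] = priority_i / priority_j and the priorities are (4, 2, 1)/7.
M = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
weights = ahp_priorities(M)
```

For a perfectly consistent matrix one multiplication already yields the exact eigenvector; for real (inconsistent) judgments the iteration converges to the principal eigenvector, which conventional AHP takes as the priority vector.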

14.
We propose using support vector machines (SVMs) to learn the efficient set in multiple objective discrete optimization (MODO). We conjecture that a surface generated by an SVM could provide a good approximation of the efficient set. As one way of testing this idea, we embed the SVM-approximated efficient set information into a genetic algorithm (GA). This is accomplished by using an SVM-based fitness function that guides the GA search. We implement our SVM-guided GA on the multiple objective knapsack and assignment problems. We observe that using the SVM improves the performance of the GA compared to a benchmark distance-based fitness function and may provide competitive results.

15.
Support Vector Machines (SVMs) are now very popular as a powerful method for pattern classification problems. One of the main features of SVMs is that they produce a separating hyperplane which maximizes the margin in a feature space induced by a nonlinear mapping using a kernel function. As a result, SVMs can treat not only linear separation but also nonlinear separation. While the soft margin method of SVMs considers only the distance between the separating hyperplane and misclassified data, we propose in this paper a multi-objective programming formulation that also considers surplus variables. A similar formulation was extensively researched in linear discriminant analysis, mostly in the 1980s, using goal programming (GP). This paper compares these conventional methods, such as SVMs and GP, with our proposed formulation through several examples. Received: September 2003; Revised: December 2003.

16.
In recent years, support vector machines (SVMs) were successfully applied to a wide range of applications. However, since the classifier is described as a complex mathematical function, it is rather incomprehensible for humans. This opacity property prevents them from being used in many real-life applications where both accuracy and comprehensibility are required, such as medical diagnosis and credit risk evaluation. To overcome this limitation, rules can be extracted from the trained SVM that are interpretable by humans and keep as much of the accuracy of the SVM as possible. In this paper, we will provide an overview of the recently proposed rule extraction techniques for SVMs and introduce two others taken from the artificial neural networks domain, being Trepan and G-REX. The described techniques are compared using publicly available datasets, such as Ripley’s synthetic dataset and the multi-class iris dataset. We will also look at medical diagnosis and credit scoring where comprehensibility is a key requirement and even a regulatory recommendation. Our experiments show that the SVM rule extraction techniques lose only a small percentage in performance compared to SVMs and therefore rank at the top of comprehensible classification techniques.

17.
Existing support vector machines (SVMs) all assume that every feature of the training samples contributes equally to constructing the optimal separating hyperplane. However, for a given real-world data set, some features may carry more relevance to the classification information, while others may carry less. In this paper, the linear feature-weighted support vector machine (LFWSVM) is proposed to deal with this problem. The model is constructed in two phases. First, a mutual information (MI) based approach is used to assign an appropriate weight to each feature of the given data set. Second, the proposed model is trained on the samples with their features weighted by the obtained feature weight vector. Meanwhile, the feature weights are embedded in the quadratic programming through detailed theoretical deduction to obtain the dual solution to the original optimization problem. Although calculating the feature weights adds an extra computational cost, the proposed model generally exhibits better generalization performance than the traditional support vector machine (SVM) with a linear kernel function. Experimental results on one synthetic data set and several benchmark data sets confirm the benefits of the proposed method. Moreover, experiments also show that the proposed MI-based approach to determining feature weights is superior to the two other most commonly used methods.
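A minimal sketch of the first phase: computing an MI score per feature and rescaling the data with the normalized weights before training a standard linear SVM. This assumes discrete feature values (continuous features would first be binned), and the normalization to unit sum is one simple choice, not necessarily the paper's; the toy data are made up.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    # MI (in nats) between one discrete feature column and the class labels.
    n = len(labels)
    pxy = Counter(zip(feature, labels))
    px, py = Counter(feature), Counter(labels)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def feature_weights(X, labels):
    # One MI score per column, normalized so the weights sum to 1.
    cols = list(zip(*X))
    mi = [mutual_information(col, labels) for col in cols]
    total = sum(mi) or 1.0
    return [m / total for m in mi]

# Toy discrete data: feature 0 determines the label, feature 1 is pure noise.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
w = feature_weights(X, labels)

# The weighted samples would then be passed to an ordinary linear SVM.
X_weighted = [tuple(wi * xi for wi, xi in zip(w, row)) for row in X]
```

On this toy set the informative feature receives all the weight and the noise feature is suppressed entirely, which is the intended effect of the weighting phase.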

18.
The support vector machine (SVM) represents a new and very promising technique for machine learning tasks involving classification, regression or novelty detection. Improvements of its generalization ability can be achieved by incorporating prior knowledge of the task at hand. We propose a new hybrid algorithm consisting of signal-adapted wavelet decompositions and hard margin SVMs for waveform classification. The adaptation of the wavelet decompositions is tailored for hard margin SV classifiers with radial basis functions as kernels. It allows the representation of the data to be optimized before training the SVM and does not suffer from computationally expensive validation techniques. We assess the performance of our algorithm against the background of current concerns in medical diagnostics, namely the classification of endocardial electrograms and the detection of otoacoustic emissions. Here the performance of hard margin SVMs can be significantly improved by our adapted preprocessing step.

19.
Method — In this paper, we introduce a bi-level optimization formulation for the model and feature selection problems of support vector machines (SVMs). A bi-level optimization model is proposed to select the best model, where the standard convex quadratic optimization problem of SVM training is cast as a subproblem. Feasibility — The optimal objective value of the quadratic problem of SVMs is minimized over a feasible range of the kernel parameters at the master level of the bi-level model. Since the optimal objective value of the subproblem is a continuous function of the kernel parameters, though implicitly defined over a certain region, a solution of this bi-level problem always exists. The problem of feature selection can be handled in a similar manner. Experiments and results — Two approaches for solving the bi-level problem of model and feature selection are considered as well. Experimental results show that the bi-level formulation provides a plausible tool for model selection.

20.
A New Hybrid Model for Credit Evaluation in Power Grid Project Finance Leasing
While power grid construction projects obtain rapid financing through project finance leasing, they expose leasing companies to considerable credit risk. Evaluating the lessee's credit in advance can effectively reduce credit risk losses. To address the multi-attribute, non-linear characteristics of credit evaluation for power grid enterprises, a hybrid credit evaluation model based on independent component analysis (ICA) and support vector machines is proposed. First, ICA is used to reconstruct the credit attribute data, denoising the attributes. Then, the reconstructed credit attribute data are used to train the SVM. Finally, a simulated case study compares and analyzes the effectiveness of ICA for SVM classification. The results show that ICA improves the characteristics of the credit attribute data and, in multi-attribute classification problems, helps raise the classification accuracy of the SVM.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号