Similar Articles (20 results)
1.
A support vector machine (SVM), a relatively new type of learning machine, was used for the first time to develop a predictive model for the early diagnosis of anorexia. The model was based on the concentrations of six elements (Zn, Fe, Mg, Cu, Ca, and Mn) and the age extracted from 90 cases. Compared with two other classifiers, partial least squares (PLS) and a back-propagation neural network (BPNN), the SVM method exhibited the best overall performance: the accuracies on the test set for PLS, BPNN, and SVM were 52%, 65%, and 87%, respectively. Moreover, the proposed models also provide some insight into which factors are related to anorexia.
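Below is a minimal Python sketch of the kind of comparison described in this abstract, using scikit-learn's SVC, a PLS regression thresholded for classification, and a small MLP as stand-ins for the SVM, PLS, and BPNN classifiers. The feature matrix, labels, and split sizes are synthetic placeholders, not the study's data.

```python
# Sketch: comparing SVM, PLS-DA, and a small neural network on a
# six-element + age feature matrix. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 7))   # Zn, Fe, Mg, Cu, Ca, Mn, age (placeholder values)
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=90) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PLS used as a classifier (PLS-DA): regress on 0/1 labels, threshold at 0.5.
pls = make_pipeline(StandardScaler(), PLSRegression(n_components=2)).fit(X_tr, y_tr)
y_pls = (pls.predict(X_te).ravel() > 0.5).astype(int)

bpnn = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                   random_state=0)).fit(X_tr, y_tr)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X_tr, y_tr)

for name, pred in [("PLS", y_pls),
                   ("BPNN", bpnn.predict(X_te)),
                   ("SVM", svm.predict(X_te))]:
    print(name, accuracy_score(y_te, pred))
```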

2.
Support vector machine (SVM) algorithms are a popular class of techniques for classification. However, outliers in the data can lead to poor overall misclassification rates. In this paper, we propose a method to identify such outliers within the SVM framework. A specific robust classification algorithm is proposed by adjusting the least squares SVM (LS-SVM). This yields better classification performance for heavy-tailed data and data containing outliers. Copyright © 2009 John Wiley & Sons, Ltd.
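LS-SVM is not provided by scikit-learn; the sketch below solves the LS-SVM classifier's linear system directly in NumPy and adds one heuristic reweighting pass that down-weights samples with large residuals. It illustrates the general weighted/robust LS-SVM idea only; the weighting rule is an assumption and not the specific algorithm of this paper.

```python
# Minimal LS-SVM classifier sketch (labels in {-1, +1}), plus one
# reweighting pass that down-weights points with large residuals.
# This is a simplified illustration, not the paper's exact algorithm.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0, weights=None):
    n = len(y)
    w = np.ones(n) if weights is None else weights
    K = rbf_kernel(X, X, sigma)
    # Linear system: [[0, 1^T], [1, K + diag(1/(gamma*w))]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * w))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_decision(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2)); y = np.sign(X[:, 0] + X[:, 1])
y[0] = -y[0]                                   # flip one label to act as an outlier
alpha, b = lssvm_fit(X, y, gamma=10.0)
err = np.abs(alpha) / 10.0                     # LS-SVM residuals e_i = alpha_i / gamma
w = 1.0 / (1.0 + (err / (np.median(err) + 1e-12)) ** 2)   # heuristic down-weighting
alpha_r, b_r = lssvm_fit(X, y, gamma=10.0, weights=w)
pred = np.sign(lssvm_decision(X, alpha_r, b_r, X))
print("training accuracy after reweighting:", (pred == y).mean())
```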

3.
Li S, Yao X, Liu H, Li J, Fan B. Analytica Chimica Acta 2007, 584(1):37-42.
The T-lymphocyte (T-cell) is a very important component of the human immune system. It possesses a receptor (TCR) that is specific for foreign epitopes, which take the form of short peptides bound to the major histocompatibility complex (MHC). When a T-cell receives the signal from peptides bound to MHC, it activates the immune system and triggers disposal of the immunogen. The antigenic determinant recognized and bound by the T-cell receptor is known as a T-cell epitope. Accurate prediction of T-cell epitopes is crucial for vaccine development and clinical immunology. For the first time, we developed models for T-cell epitope prediction using the least squares support vector machine (LSSVM) and amino acid properties. A dataset of 203 short peptides (167 non-epitopes and 36 epitopes) was used as input and randomly divided into a training set and a test set. The LSSVM models based on amino acid properties were evaluated by leave-one-out cross-validation and by their predictive ability on the test set, giving areas under the ROC curve of 0.9875 and 0.9734, respectively. These results are more satisfactory than those reported previously; in particular, the true-positive rate is markedly improved.

4.
We introduce a family of positive definite kernels specifically optimized for the manipulation of 3D structures of molecules with kernel methods. The kernels are based on the comparison of the three-point pharmacophores present in the 3D structures of molecules, a set of molecular features known to be particularly relevant for virtual screening applications. We present a computationally demanding exact implementation of these kernels, as well as fast approximations related to the classical fingerprint-based approaches. Experimental results suggest that this new approach is competitive with state-of-the-art algorithms based on the 2D structure of molecules for the detection of inhibitors of several drug targets.
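A rough Python sketch of the fingerprint-style approximation mentioned above: enumerate three-point pharmacophore triplets with binned distances, count them, and compare molecules with a min-max (Tanimoto-like) kernel. The molecule representation (a list of (pharmacophore_type, 3D coordinate) pairs), the bin width, and the canonicalization of triplets are illustrative assumptions, not the paper's exact kernel.

```python
# Sketch of a three-point pharmacophore fingerprint kernel.
from collections import Counter
from itertools import combinations
import numpy as np

def triplet_counts(features, bin_width=1.0):
    """features: list of (pharmacophore_type, np.array(xyz)) points."""
    counts = Counter()
    for (t1, p1), (t2, p2), (t3, p3) in combinations(features, 3):
        d = tuple(int(np.linalg.norm(a - b) / bin_width)
                  for a, b in ((p1, p2), (p2, p3), (p1, p3)))
        # Crude canonicalization: sort types and distances independently.
        # The exact kernel of the paper matches typed triangles more carefully.
        key = (tuple(sorted((t1, t2, t3))), tuple(sorted(d)))
        counts[key] += 1
    return counts

def minmax_kernel(c1, c2):
    keys = set(c1) | set(c2)
    num = sum(min(c1[k], c2[k]) for k in keys)
    den = sum(max(c1[k], c2[k]) for k in keys)
    return num / den if den else 0.0

# Two toy "molecules": donor (D), acceptor (A), aromatic (R) points in 3D
molA = [("D", np.array([0., 0., 0.])), ("A", np.array([3., 0., 0.])),
        ("R", np.array([0., 4., 0.])), ("A", np.array([2., 2., 1.]))]
molB = [("D", np.array([0., 0., 0.])), ("A", np.array([3.2, 0., 0.])),
        ("R", np.array([0., 3.9, 0.]))]
print(minmax_kernel(triplet_counts(molA), triplet_counts(molB)))
```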

5.
6.
Nonlinear kernel methods have been widely used to deal with nonlinear problems in latent variable methods. However, in the presence of structured noise, these methods have reduced efficacy. We have previously introduced constrained latent variable methods that make use of any available additional knowledge about the structured noise. These methods improve performance by introducing additional constraints into the algorithm. In this paper, we build upon our previous work and introduce hard-constrained and soft-constrained nonlinear partial least squares methods using nonlinear kernels. The addition of nonlinear kernels reduces the effects of structured noise in nonlinear spaces and improves the regression performance between the input and response variables. Copyright © 2014 John Wiley & Sons, Ltd.

7.
Optimized sample-weighted partial least squares
Lu Xu. Talanta 2007, 71(2):561-566.
In ordinary multivariate calibration, once the calibration set is chosen to build the model relating the dependent variables to the predictor variables, every sample in the calibration set contributes equally to the model, and differences in representativeness between samples are ignored. In this paper, by introducing the concept of weighted sampling into partial least squares (PLS), a new multivariate regression method, optimized sample-weighted PLS (OSWPLS), is proposed. OSWPLS differs from PLS in that it builds a new calibration set in which each sample from the original calibration set is weighted differently according to its representativeness, thereby improving the prediction ability of the algorithm. A recently proposed global optimization algorithm, particle swarm optimization (PSO), is used to search for the sample weights that best optimize the calibration of the original training set and the prediction of an independent validation set. The proposed method is applied to two real data sets and compared with PLS. The most significant improvement is obtained for the meat data, where the root mean squared error of prediction (RMSEP) is reduced from 3.03 to 2.35. For the fuel data, OSWPLS also performs slightly better than, or no worse than, PLS for the prediction of the four analytes. The stability and efficiency of OSWPLS are also studied; the results demonstrate that the proposed method can obtain desirable results within a moderate number of PSO cycles.
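A minimal sketch of the sample-weighting idea, under two simplifying assumptions: sample weighting is approximated by scaling mean-centered rows with the square root of each weight before fitting PLS, and the PSO below is a bare-bones textbook variant. Data, swarm settings, and the weighting mechanism are illustrative, not the OSWPLS algorithm itself.

```python
# Sketch: PLS with sample weights tuned by a minimal particle swarm.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50)); beta = rng.normal(size=50)
y = X @ beta + rng.normal(scale=0.5, size=60)
X_cal, y_cal, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

def rmsep(weights, n_comp=5):
    w = np.clip(weights, 1e-3, 1.0)
    Xc = (X_cal - X_cal.mean(0)) * np.sqrt(w)[:, None]   # weighted, centered rows
    yc = (y_cal - y_cal.mean()) * np.sqrt(w)
    pls = PLSRegression(n_components=n_comp, scale=False).fit(Xc, yc)
    pred = pls.predict(X_val - X_cal.mean(0)).ravel() + y_cal.mean()
    return np.sqrt(np.mean((pred - y_val) ** 2))

# Minimal PSO over the 40 sample weights
n_part, dim, iters = 20, len(y_cal), 50
pos = rng.uniform(0, 1, (n_part, dim)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([rmsep(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    f = np.array([rmsep(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("RMSEP, uniform weights:", rmsep(np.ones(dim)))
print("RMSEP, PSO weights:    ", pbest_f.min())
```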

8.
Kernel partial least squares (KPLS) has become a popular technique for regression and classification of complex data sets. It is a nonlinear extension of linear PLS in which training samples are transformed into a feature space via a nonlinear mapping, after which the PLS algorithm is carried out in that feature space. In the present study, we develop a novel tree KPLS (TKPLS) classification algorithm by constructing an informative kernel on the basis of decision tree ensembles. The constructed tree kernel can effectively capture the similarities between samples and select informative features by variable importance ranking while the kernel is being built. At the same time, TKPLS can handle nonlinear relationships in structure-activity relationship data through this kernel. Finally, three data sets involving different categorical bioactivities of compounds are used to evaluate the performance of TKPLS. The results show that TKPLS can be regarded as an alternative and promising classification technique. Copyright © 2013 John Wiley & Sons, Ltd.
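The sketch below illustrates the tree-kernel idea: two samples are similar when they land in the same leaves across a random-forest ensemble. The paper plugs such a kernel into kernel PLS; since kernel PLS is not in scikit-learn, a precomputed-kernel SVM is used here as a simpler stand-in consumer of the kernel, and the data set is synthetic.

```python
# Sketch: leaf-co-occurrence "tree kernel" fed to a precomputed-kernel classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def tree_kernel(A, B, forest):
    """K[i, j] = fraction of trees in which A[i] and B[j] share a leaf."""
    leaves_a = forest.apply(A)          # shape (n_a, n_trees)
    leaves_b = forest.apply(B)          # shape (n_b, n_trees)
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

K_tr = tree_kernel(X_tr, X_tr, forest)
K_te = tree_kernel(X_te, X_tr, forest)

clf = SVC(kernel="precomputed", C=1.0).fit(K_tr, y_tr)
print("test accuracy with tree kernel:", clf.score(K_te, y_te))

# Variable importance ranking, used in the paper for feature selection,
# falls out of the same ensemble:
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top-5 informative features:", top)
```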

9.
With the aim of developing a nonlinear tool for near-infrared spectral (NIRS) calibration, an algorithm called MIKPLS is designed from the combination of two strategies: mutual information (MI) for interval selection and kernel partial least squares (KPLS) for modeling. Because mutual information can capture linear and nonlinear dependencies between variables simultaneously, the mutual information between each candidate variable and the target is calculated and used to induce a continuous wavelength interval, which is then used to build a parsimonious calibration model with kernel partial least squares. Experiments on two datasets indicate that MI-induced interval selection followed by KPLS forms a simple and practical tool, allowing a prediction model to be constructed from a much-reduced set of neighboring variables without any loss of generalization ability and with improved prediction performance.
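A small sketch of the MI-guided interval selection step: mutual_info_regression scores each wavelength against the target and a sliding window picks the contiguous interval with the highest mean MI. An RBF kernel-ridge model stands in for the KPLS regression stage (kernel PLS is not available in scikit-learn); the spectra, interval width, and kernel settings are assumptions.

```python
# Sketch: MI-based contiguous interval selection + a nonlinear kernel model.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 100, 300                       # 100 spectra, 300 wavelengths (synthetic)
X = rng.normal(size=(n, p))
c = rng.normal(size=n)                # latent "concentration"
X[:, 140:160] += 2.0 * c[:, None]     # absorbance band carrying the signal
y = c + 0.3 * c ** 2                  # mildly nonlinear response

mi = mutual_info_regression(X, y, random_state=0)

width = 20                            # interval width (an assumption)
window_scores = np.convolve(mi, np.ones(width) / width, mode="valid")
start = int(window_scores.argmax())
interval = slice(start, start + width)
print("selected interval:", start, "-", start + width - 1)

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.05)
score = cross_val_score(model, X[:, interval], y, cv=5,
                        scoring="neg_root_mean_squared_error").mean()
print("CV RMSE on selected interval:", -score)
```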

10.
Ren S, Gao L. The Analyst 2011, 136(6):1252-1261.
This paper proposes a novel method, DF-LS-SVM, based on least squares support vector machine (LS-SVM) regression combined with data fusion (DF), to enhance the extraction of characteristic information and improve the quality of the regression. Simultaneous multicomponent determination of Fe(III), Co(II) and Cu(II) was conducted for the first time using the proposed method. Data fusion is a technique that integrates information from disparate sources to produce a single model or decision. The LS-SVM technique can learn a high-dimensional feature space from fewer training data and reduces the computational complexity by requiring only the solution of a set of linear equations instead of a quadratic programming problem. Experimental results showed that the DF-LS-SVM method was successful for simultaneous multicomponent determination even when the spectra overlapped severely. DF-LS-SVM is an attractive and promising hybrid approach that combines the best properties of the two techniques. The results obtained from an additional test case, the simultaneous differential pulse voltammetric determination of o-nitrophenol, m-nitrophenol and p-nitrophenol, also demonstrated that DF-LS-SVM performs somewhat better than the LS-SVM and PLS methods.

11.

Note

Correction to: Extending the trend vector: The trend matrix and sample-based partial least squares

12.
Two alternative partial least squares (PLS) methods, averaged PLS and weighted-average PLS, are proposed and compared with classical PLS in terms of the root mean squared error of prediction (RMSEP) on three real data sets. These methods compute the (weighted) average of PLS models of different complexity. Their prediction abilities are comparable to that of classical PLS, but they do not require determining how many components should be included in the model. They are also more robust in the sense that the quality of prediction depends less on a good choice of the number of components. In addition, weighted-average PLS is compared with the weighted-average part of LOCAL, a published method that also applies weighted-average PLS but with an entirely different weighting scheme.
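A compact sketch of the weighted-average idea: fit PLS models with 1 to A_max components and average their predictions, weighting each model by the inverse of its cross-validated error. The specific weighting rule below (inverse squared CV RMSE) and the synthetic data are assumptions; the papers cited above use their own schemes.

```python
# Sketch: averaging PLS models across different numbers of components.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 40))
y = X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=80)

A_max = 10
models, cv_rmse = [], []
for a in range(1, A_max + 1):
    pls = PLSRegression(n_components=a)
    pred_cv = cross_val_predict(pls, X, y, cv=5).ravel()
    cv_rmse.append(np.sqrt(np.mean((pred_cv - y) ** 2)))
    models.append(pls.fit(X, y))

weights = 1.0 / np.array(cv_rmse) ** 2        # better models weigh more
weights /= weights.sum()

def weighted_average_predict(X_new):
    preds = np.column_stack([m.predict(X_new).ravel() for m in models])
    return preds @ weights

print("per-complexity CV RMSE:", np.round(cv_rmse, 3))
print("ensemble prediction for first sample:", weighted_average_predict(X[:1]))
```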

13.
From the fundamental parts of PLS-DA, Fisher's canonical discriminant analysis (FCDA) and powered PLS (PPLS), we develop the concept of powered PLS for classification problems (PPLS-DA). By taking advantage of a sequence of data-reducing linear transformations (consistent with the computation of ordinary PLS-DA components), PPLS-DA computes each component from the transformed data by maximizing a parameterized Rayleigh quotient associated with FCDA. Models found by the powered PLS methodology can help reveal the relevance of particular predictors and often require fewer and simpler components than their ordinary PLS counterparts. The possibility of imposing restrictions on the powers available for optimization yields an explorative approach to predictive modeling not available to the traditional PLS methods. Copyright © 2008 John Wiley & Sons, Ltd.

14.
Least squares support vector machines (LS-SVM) were used to model infrared spectral data for TSH, the hormone secreted by the thyroid that regulates the basal metabolic rate. The model was used for direct estimation of the TSH content in blood serum samples, and the results were comparable with those obtained by the conventional analytical method based on chemiluminescence. Excellent agreement was observed between the conventional method and the new calibration model based on analysis of spectral data with LS-SVM. The latter has clear advantages: it is fast and requires no reagents, since the measurements are made directly in the serum using a simple mid-infrared spectrometer in ATR mode. An important advantage of this LS-SVM calibration method is its remarkable capacity to avoid overfitting in the model-building step; that is, the developed method is highly robust.

15.
The calibration performance of partial least squares for one response variable (PLS1) can be improved by eliminating uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS model parameters and which may change during the variable reduction process. In these methods, variable reduction operates on the variables ranked in descending order of a given variable property. The methods start with full-spectrum modelling; iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated, a new PLS model is calculated, and the variables are re-ranked. These stepwise variable reduction methods using predictive-property-ranked variables are denoted SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed in which the possibility of decreasing the PLS model complexity during variable reduction is built in; we therefore denote them PPRVR-CAM methods (predictive-property-ranked variable reduction with complexity-adapted models). The selective and predictive abilities of the new methods are investigated and tested using the absolute PLS regression coefficients as the predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or interval PLS (UVE-iPLS). The performance of the methods is investigated on two near-infrared (NIR) data sets and one simulated set, and the selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods retained significantly smaller numbers of informative variables than the existing SVR-PPRV, UVE-GA-PLS and UVE-iPLS methods without loss of prediction ability. Contrary to UVE-GA-PLS and UVE-iPLS, there is no variability in the number of retained variables in any PPRV(R) method. Renewed variable ranking after deletion of a variable, followed by remodelling and combined with the possibility of decreasing the PLS model complexity, is beneficial, and a preferred PPRVR-CAM method is proposed.
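The sketch below illustrates the general scheme described in this abstract: at each step the variable with the smallest absolute PLS regression coefficient is removed, the model is refit (re-ranking), and the number of components is re-chosen by cross-validation so the complexity can shrink as variables disappear. The data, complexity grid, and stopping size are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: backward variable elimination ranked by |PLS coefficient|,
# with CV-adapted model complexity at every step.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 80))
y = X[:, :6] @ rng.normal(size=6) + 0.2 * rng.normal(size=60)

def best_pls(Xs, y, max_comp=8):
    """Return (fitted model, n_components, CV RMSE) with CV-chosen complexity."""
    best = None
    for a in range(1, min(max_comp, Xs.shape[1]) + 1):
        rmse = -cross_val_score(PLSRegression(n_components=a), Xs, y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        if best is None or rmse < best[2]:
            best = (PLSRegression(n_components=a).fit(Xs, y), a, rmse)
    return best

keep = np.arange(X.shape[1])
target_size = 10
while len(keep) > target_size:
    model, n_comp, rmse = best_pls(X[:, keep], y)
    coefs = np.abs(np.ravel(model.coef_))     # predictive property
    keep = np.delete(keep, coefs.argmin())    # drop the weakest variable, then re-rank

model, n_comp, rmse = best_pls(X[:, keep], y)
print("retained variables:", np.sort(keep))
print("final complexity:", n_comp, "CV RMSE:", round(rmse, 3))
```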

16.
17.
In this paper, fault detection and identification methods based on semi-supervised Laplacian-regularized kernel partial least squares (LRKPLS) are proposed. In the Laplacian regularization learning framework, unlabeled and labeled samples are used together to improve the estimate of the data manifold, so that a more robust data model can be established. We show that LRKPLS can avoid the over-fitting problem that may be caused by insufficient samples and the presence of outliers. Moreover, the proposed LRKPLS approach places no special restriction on the data distribution; in other words, it can be used with nonlinear or non-Gaussian data. On the basis of LRKPLS, corresponding fault detection and identification methods are proposed. These methods are used to monitor a numerical example and the Hot Galvanizing Pickling Waste Liquor Treatment Process (HGPWLTP), and the case studies show the effectiveness of the proposed approaches. Copyright © 2016 John Wiley & Sons, Ltd.

18.
Multi-way partial least squares modeling of water quality data
A ten-year surface water quality data set from a polluted river was analyzed using partial least squares (PLS) regression models. Both unfold-PLS and N-PLS (tri-PLS and quadri-PLS) models were calibrated by leave-one-out cross-validation. These were applied to the multivariate, multi-way data array to assess and compare their ability to predict the biochemical oxygen demand (BOD) of the river water in terms of their mean squared errors of cross-validation and prediction and the variance captured. The sums of squares of residuals and the leverages were computed and analyzed to identify the sites, variables, years and months that may influence the constructed model. Both the tri- and quadri-PLS models yielded relatively low validation errors compared with unfold-PLS and captured a high proportion of the variance. Moreover, both methods produced acceptable model precision and accuracy: for tri-PLS the root mean squared errors were 1.65 and 2.17 for calibration and prediction, respectively, whereas for quadri-PLS they were 2.58 and 1.09. At a preliminary level it appears that BOD can be predicted, although a different data arrangement is needed. Moreover, analysis of the scores and loadings plots of the N-PLS models can provide information on the time evolution of the river water quality.
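A minimal sketch of the unfold-PLS baseline mentioned above: a three-way array (sites x months x variables) is unfolded along the sample mode into an ordinary 2-D matrix and modeled with standard PLS to predict BOD. A genuine tri-PLS/N-PLS model requires a dedicated multiway package; the array shapes and response below are illustrative, not the actual monitoring data.

```python
# Sketch: unfold a 3-way water-quality array and fit ordinary PLS (unfold-PLS).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_sites, n_months, n_vars = 20, 12, 10
X3 = rng.normal(size=(n_sites, n_months, n_vars))    # water-quality variables
bod = X3[:, :, 0].mean(axis=1) + 0.1 * rng.normal(size=n_sites)  # toy response

# Unfold: each site becomes one row of length n_months * n_vars
X_unfolded = X3.reshape(n_sites, n_months * n_vars)

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X_unfolded, bod, cv=5).ravel()
rmsecv = np.sqrt(np.mean((pred - bod) ** 2))
print("unfold-PLS RMSECV for BOD:", round(rmsecv, 3))
```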

19.
20.
Partial least squares (PLS) regression is a linear regression technique developed to relate many regressors to one or several response variables. Robust methods are introduced to reduce or remove the effect of outlying data points. In this paper, we show that if the sample covariance matrix is properly robustified, further robustification of the linear regression steps of the PLS algorithm becomes unnecessary. The robust estimate of the covariance matrix is computed by searching for outliers in univariate projections of the data on a combination of random directions (Stahel-Donoho) and specific directions obtained by maximizing and minimizing the kurtosis coefficient of the projected data, as proposed by Peña and Prieto [1]. This procedure is fast to apply and provides better results than other methods proposed in the literature. Its performance is illustrated by Monte Carlo simulation and by an example, where the algorithm is able to show features of the data that were undetected by previous methods. Copyright © 2008 John Wiley & Sons, Ltd.
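The sketch below shows the general pattern of robustifying the covariance structure before PLS: a robust covariance estimate flags outlying samples, which are removed (they could equally be down-weighted) before an ordinary PLS fit. MinCovDet is used here purely as a convenient stand-in for the Stahel-Donoho / kurtosis-projection estimator of the paper, and the data, cut-off, and number of components are assumptions.

```python
# Sketch: robust-covariance outlier screening followed by ordinary PLS.
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = X @ rng.normal(size=8) + 0.2 * rng.normal(size=100)
X[:5] += 8.0                          # inject a few gross outliers
y[:5] -= 15.0

Z = np.column_stack([X, y])           # robustify the joint (X, y) covariance
mcd = MinCovDet(random_state=0).fit(Z)
d2 = mcd.mahalanobis(Z)               # squared robust distances
threshold = np.quantile(d2, 0.90)     # illustrative cut-off
clean = d2 <= threshold

pls_all = PLSRegression(n_components=4).fit(X, y)
pls_rob = PLSRegression(n_components=4).fit(X[clean], y[clean])
print("flagged as outliers:", np.where(~clean)[0][:10])
print("coef change (L2 norm):",
      np.linalg.norm(np.ravel(pls_all.coef_) - np.ravel(pls_rob.coef_)).round(3))
```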
