Similar Literature (20 results)
1.
This paper investigates the effects of the ratio of positive to negative samples on sensitivity, specificity, and concordance. When the class sizes in the training samples are unequal, the derived classification rule favors the majority class and yields low sensitivity for the minority class. We propose an ensemble classification approach to adjust for differential class sizes in a binary classifier system. An ensemble classifier consists of a set of base classifiers; its prediction rule is based on a summary measure of the individual classifications made by the base classifiers. Two re-sampling methods, augmentation and abatement, are proposed to generate bootstrap samples of equal class size from which the base classifiers are built. The augmentation method balances the two class sizes by bootstrapping additional samples from the minority class, whereas the abatement method balances them by sampling only a subset of the majority class. The proposed procedure is applied to a data set for predicting estrogen receptor binding activity and to a data set for predicting animal liver carcinogenicity, using SAR (structure-activity relationship) models as base classifiers. The abatement method appears to perform well in balancing sensitivity and specificity.
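A minimal sketch of the abatement strategy described above, assuming plain scikit-learn decision trees as stand-ins for the SAR base classifiers and numpy arrays X, y as inputs; the function names (abatement_ensemble, predict_majority_vote) are hypothetical, and the augmentation variant would instead bootstrap extra minority-class samples up to the majority size.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def abatement_ensemble(X, y, n_estimators=51, random_state=0):
    """Build base classifiers on bootstrap samples of equal class size,
    balancing by drawing only a subset of the majority class (abatement)."""
    rng = np.random.default_rng(random_state)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    n = len(minority)
    models = []
    for _ in range(n_estimators):
        idx = np.concatenate([rng.choice(minority, n, replace=True),   # bootstrap minority
                              rng.choice(majority, n, replace=False)])  # subset of majority
        models.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))
    return models

def predict_majority_vote(models, X):
    """Summary measure over the base classifiers: simple majority vote."""
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)

# toy imbalanced data standing in for the SAR descriptor matrices
X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=0)
ensemble = abatement_ensemble(X, y)
print((predict_majority_vote(ensemble, X) == y).mean())
```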

2.
Computational models to predict the developmental toxicity of compounds are built on imbalanced datasets in which toxicants outnumber non-toxicants; consequently, the results are biased towards the majority class (toxicants). To overcome this problem and obtain classifiers that are both sensitive and accurate, we followed an integrated approach in which (i) the Synthetic Minority Over-sampling Technique (SMOTE) is used for re-sampling, (ii) a genetic algorithm (GA) is used for variable selection, and (iii) support vector machines (SVM) are used for model development. The best model, M3, has (i) sensitivity (SE) = 85.54% and specificity (SP) = 85.62% in leave-one-out validation, (ii) a classification accuracy of 99.67% on the training set, (iii) a classification accuracy of 92.59% on the test set, and (iv) sensitivity = 92.68% and specificity = 92.31% on the test set. Consensus prediction based on models M3–M5 improved these percentages by 5% over M3. From the analysis of the results we infer that data imbalance in toxicity studies can be effectively addressed by the application of re-sampling techniques.
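A hedged sketch of the re-sampling step only, using the SMOTE implementation from the imbalanced-learn package and a generic RBF support vector machine; the GA-based variable selection and the actual toxicity descriptors are omitted, and the synthetic data and hyper-parameters shown are placeholders, not the published model M3.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

# synthetic imbalanced data standing in for the developmental-toxicity descriptors
X, y = make_classification(n_samples=200, n_features=20, weights=[0.7, 0.3], random_state=0)

# oversample the minority class with SMOTE, then train and validate an SVM
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
y_loo = cross_val_predict(SVC(kernel="rbf", gamma="scale"), X_bal, y_bal, cv=LeaveOneOut())

print("LOO sensitivity:", recall_score(y_bal, y_loo, pos_label=1))
print("LOO specificity:", recall_score(y_bal, y_loo, pos_label=0))
```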

3.
4.
Hui Chen, Zan Lin, Lin Mo. Analytical Letters, 2017, 50(16): 2608–2618
Rapid and objective detection of cancer is crucial for successful treatment. Near-infrared (NIR) spectroscopy is a vibrational technique capable of optically probing molecular changes associated with disease. The purpose of this study was to explore NIR spectroscopy for discriminating cancerous from normal colorectal tissue. A total of 110 tissue samples from patients who underwent surgery were characterized. The popular ensemble technique AdaBoost was used to construct the diagnostic model, with a decision stump as the weak learning algorithm. AdaBoost with decision stumps, an ensemble of weak classifiers, was compared with the most suitable single model, a strong classifier. Only the 20 most significant variables, selected by measured variable importance, were used as model inputs. On an independent test set, the single strong classifier gave a diagnostic accuracy of 89.1%, sensitivity of 100%, and specificity of 78.6%, whereas the ensemble of weak stumps gave accuracy, sensitivity, and specificity of 96.3% each for distinguishing cancerous from normal colorectal tissue. NIR spectroscopy combined with AdaBoost and decision stumps therefore shows potential for rapid and objective diagnosis of colorectal cancer.
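A minimal sketch of the AdaBoost-with-stumps idea, assuming synthetic stand-in data in place of the 20 selected NIR variables (the spectra themselves are not reproduced here); scikit-learn's AdaBoostClassifier uses a depth-1 decision tree, i.e. a decision stump, as its default weak learner.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the 20 most important NIR variables of 110 tissue samples
X, y = make_classification(n_samples=110, n_features=20, n_informative=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

# boosted ensemble of weak classifiers (the default weak learner is a decision stump)
ada = AdaBoostClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
y_pred = ada.predict(X_te)

print("accuracy:", accuracy_score(y_te, y_pred))
print("sensitivity:", recall_score(y_te, y_pred, pos_label=1))
print("specificity:", recall_score(y_te, y_pred, pos_label=0))
```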

5.
6.
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared with classical ensemble methods using four molecular datasets and eight different models for representing the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves classification results relative to classical ensemble methods. We therefore show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.

7.
Class prediction based on DNA microarray data has emerged as one of the most important applications of bioinformatics for diagnostics and prognostics. Robust classifiers are needed that use the most biologically relevant genes embedded in the data. A consensus approach that combines multiple classifiers mitigates this difficulty better than a single classifier. A new classification method, consensus analysis of multiple classifiers using non-repetitive variables (CAMCUN), was proposed for the analysis of hyper-dimensional gene expression data. The CAMCUN method combines multiple classifiers, each built from distinct, non-repeated genes selected for their effectiveness in class differentiation; the final classifier thus draws on the most biologically relevant genes. The CAMCUN algorithm was demonstrated to give consistently more accurate predictions on two well-known datasets for prostate cancer and leukemia. Importantly, it employs an integrated 10-fold cross-validation and randomization test to assess the degree of confidence of predictions for unknown samples.
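A rough sketch of the non-repetitive-variable idea behind CAMCUN, under the assumption that genes ranked by a univariate F-test are split into disjoint blocks, one logistic-regression classifier is trained per block, and the predictions are combined by majority vote; the function names are hypothetical and the authors' actual feature-selection and consensus rules may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression

def camcun_like_fit(X, y, n_classifiers=5, genes_per_classifier=20):
    """Rank genes by an F-test and give each classifier its own non-repeated block."""
    ranked = np.argsort(f_classif(X, y)[0])[::-1]
    models, subsets = [], []
    for k in range(n_classifiers):
        subset = ranked[k * genes_per_classifier:(k + 1) * genes_per_classifier]
        models.append(LogisticRegression(max_iter=1000).fit(X[:, subset], y))
        subsets.append(subset)
    return models, subsets

def camcun_like_predict(models, subsets, X):
    votes = np.mean([m.predict(X[:, s]) for m, s in zip(models, subsets)], axis=0)
    return (votes >= 0.5).astype(int)   # simple majority-vote consensus

# toy "hyper-dimensional" expression matrix: 100 samples, 500 genes
X, y = make_classification(n_samples=100, n_features=500, n_informative=30, random_state=0)
models, subsets = camcun_like_fit(X, y)
print((camcun_like_predict(models, subsets, X) == y).mean())
```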

8.
Drug-induced liver injury, although infrequent, is an important safety concern that can lead to patient fatalities and failures in drug development. In this study, we used an ensemble of mixed learning algorithms and mixed features to develop a model for predicting hepatic effects. This robust approach is based on the premise that no single learning algorithm is optimal for all modelling problems. An ensemble model of 617 base classifiers was built from a diverse set of 1,087 compounds and validated internally with five-fold cross-validation and 25 rounds of y-randomization. In external validation on 120 compounds, the ensemble model achieved an accuracy of 75.0%, sensitivity of 81.9%, and specificity of 64.6%. The model also identified 22 of 23 drugs that were withdrawn or carry a black-box warning for hepatotoxicity. Dronedarone, which is associated with severe liver injury reported in a recent FDA drug safety communication, was predicted to be hepatotoxic by the ensemble model. The ensemble model classified positive compounds (those with hepatic effects) well, but was less successful on negative compounds that were structurally similar to positives. The ensemble model built in this study is made available for public use.
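The abstract does not list the 617 base classifiers or the descriptors, so the sketch below only illustrates the "mixed learning algorithms" premise with a small soft-voting ensemble of four common learners on synthetic data; it is not the published hepatotoxicity model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic stand-in for descriptors of the 1,087 training compounds
X, y = make_classification(n_samples=1087, n_features=50, weights=[0.4, 0.6], random_state=0)

# heterogeneous ensemble: different learning algorithms combined by probability averaging
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("nb", GaussianNB())],
    voting="soft")

print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```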

9.
In biospectroscopy, suitably annotated and statistically independent samples (e.g. patients, batches, etc.) for classifier training and testing are scarce and costly. Learning curves show model performance as a function of the training sample size and can help to determine the sample size needed to train good classifiers. However, building a good model is not enough: its performance must also be proven. We discuss learning curves for typical small-sample-size situations with 5–25 independent samples per class. Although the classification models achieve acceptable performance, the learning curve can be completely masked by the random testing uncertainty arising from the equally limited test sample size. We therefore determine the test sample sizes necessary to achieve reasonable precision in validation and find that 75–100 samples are usually needed to test a good but not perfect classifier. Such a data set then allows refined sample size planning on the basis of the achieved performance. We also demonstrate how to calculate the sample sizes necessary to show the superiority of one classifier over another: this often requires hundreds of statistically independent test samples or is even theoretically impossible. We illustrate our findings with a data set of ca. 2550 Raman spectra of single cells (five classes: erythrocytes, leukocytes, and three tumour cell lines, BT-20, MCF-7 and OCI-AML3) as well as with an extensive simulation that allows precise determination of the actual performance of the models in question.
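A back-of-the-envelope illustration of the testing uncertainty discussed above: the half-width of a normal-approximation 95% confidence interval for an observed sensitivity (here assumed to be 0.90) as a function of the number of independent test samples. The numbers are only indicative; the paper's own analysis uses more careful interval estimates.

```python
import math

def ci_halfwidth(p_hat, n, z=1.96):
    """95% normal-approximation confidence half-width for a proportion
    (e.g. sensitivity) estimated from n independent test samples."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

for n in (5, 25, 75, 100, 250):
    print(f"n = {n:3d} test samples: sensitivity 0.90 +/- {ci_halfwidth(0.90, n):.3f}")
# with 25 samples the interval spans roughly +/- 0.12, with 100 samples about +/- 0.06
```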

10.
Drug-target interaction (DTI) prediction by in vitro methods is expensive and time-consuming, whereas computational methods can save time and money while enhancing drug discovery efficiency. Most computational methods frame DTI prediction as a binary classification task. One important challenge is that the number of negative interactions in all DTI-related datasets far exceeds the number of positive interactions, leading to the class imbalance problem. As a result, the classifier is trained with a bias towards the majority (negative) class, whereas the minority class (interacting pairs) is the class of interest. This class imbalance problem is rarely taken into account in DTI prediction studies, and the few previous studies that consider balancing in DTI do not focus on the imbalance issue itself, nor do they benefit from deep learning models or experimental validation. In this study, we propose a computational framework, together with experimental validation, that predicts drug-target interactions using an ensemble of deep learning models to address the class imbalance problem in the DTI domain. The objective is to mitigate bias in DTI prediction by focusing on the impact of balancing while holding the other parameters involved constant. Our analysis shows that the proposed model outperforms unbalanced models with the same architecture trained on BindingDB, both computationally and experimentally. These findings demonstrate the significance of balancing, which reduces the bias towards the negative class and leads to better performance. It is important to note that relying on computational results without experimental validation, and on AUROC and AUPRC metrics alone, is not credible, particularly when the test set remains unbalanced.
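A hedged sketch of the balancing strategy only: each ensemble member is trained on all positive drug-target pairs plus a different, equally sized random draw of negatives, and the members' probabilities are averaged. A small scikit-learn MLP stands in for the paper's deep learning models, whose architecture is not given in the abstract; the function names are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def balanced_ensemble(X, y, n_members=10, random_state=0):
    """Each member sees all positives plus an equal-sized random subset of the
    (majority) negatives, so every member is trained on a balanced set."""
    rng = np.random.default_rng(random_state)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    members = []
    for _ in range(n_members):
        idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
        members.append(MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                                     random_state=random_state).fit(X[idx], y[idx]))
    return members

def ensemble_score(members, X):
    """Average the members' predicted interaction probabilities."""
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
```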

11.
A hierarchical scheme has been developed for detection of bovine spongiform encephalopathy (BSE) in serum on the basis of its infrared spectral signature. In the first stage, binary subsets between samples originating from diseased and non-diseased cattle are defined along known covariates within the data set. Random forests are then used to select spectral channels on each subset, on the basis of a multivariate measure of variable importance, the Gini importance. The selected features are then used to establish binary discriminations within each subset by means of ridge regression. In the second stage of the hierarchical procedure the predictions from all linear classifiers are used as input to another random forest that provides the final classification. When applied to an independent, blinded validation set of 160 further spectra (84 BSE-positives, 76 BSE-negatives), the hierarchical classifier achieves a sensitivity of 92% and a specificity of 95%. Compared with results from an earlier study based on the same data, the hierarchical scheme performs better than linear discriminant analysis with features selected by genetic optimization and robust linear discriminant analysis, and performs as well as a neural network and a support vector machine.
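A rough sketch of the two-stage structure, assuming the binary subsets are supplied as (sample indices, labels) pairs: stage one selects channels by random-forest Gini importance and fits a ridge classifier per subset, stage two feeds the stacked linear scores to a second random forest. In practice the stage-one scores used to train stage two should come from held-out data to avoid leakage; the helper names are hypothetical and this is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier

def fit_first_stage(X, subsets, n_channels=50):
    """One (channel set, ridge classifier) pair per binary subset; channels are
    chosen by random-forest Gini importance within that subset."""
    stage1 = []
    for rows, y_bin in subsets:                       # rows: sample indices of this subset
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[rows], y_bin)
        channels = np.argsort(rf.feature_importances_)[::-1][:n_channels]
        stage1.append((channels, RidgeClassifier().fit(X[rows][:, channels], y_bin)))
    return stage1

def fit_second_stage(stage1, X, y):
    """Stack the linear scores of all subset classifiers and let a second
    random forest make the final BSE-positive / BSE-negative call."""
    meta = np.column_stack([clf.decision_function(X[:, ch]) for ch, clf in stage1])
    return RandomForestClassifier(n_estimators=500, random_state=0).fit(meta, y)
```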

12.
Real-world applications inevitably entail divergence between the samples on which chemometric classifiers are trained and the unknowns requiring classification. This has long been recognized, but there is a shortage of empirical studies on which classifiers perform best in 'external validation' (EV), where the unknown samples are subject to sources of variation not present in the population used to train the classifier. A survey of 286 classification studies in analytical chemistry found only 6.6% that stated elements of variance between training and test samples; most instead tested classifiers using hold-outs or resampling (usually cross-validation) from the same population used in training. The present study evaluated a wide range of classifiers on NMR and mass spectra of plant and food materials, from four projects with different data properties (e.g., different numbers and prevalences of classes) and classification objectives. Cross-validation was found to be optimistic relative to EV on samples of different provenance from the training set (e.g., different genotypes, growth conditions, or seasons of crop harvest). For classifier evaluations across the diverse tasks, we used rank-based non-parametric comparisons and permutation-based significance tests. Although latent variable methods (e.g., PLS-DA) were used in 64% of the surveyed papers, they were among the less successful classifiers in EV, and orthogonal signal correction was counterproductive. The best EV performances were instead obtained with machine learning schemes that coped with the high dimensionality (914–1898 features). Random forests confirmed their resilience to high dimensionality, being the best overall performers on the full data despite appearing in only 4.5% of the surveyed papers. Most other machine learning classifiers were improved by a feature selection filter (ReliefF), but still did not outperform random forests.

13.
Near-infrared spectroscopy has gained wide acceptance in industry owing to its many applications and versatility. Sometimes, however, the construction of accurate and robust calibration models requires the collection of a large number of samples with associated reference analyses, which can complicate and prolong the calibration stage. In this paper, ensemble methods and data augmentation by noise simulation have been applied to spectroscopic data in combination with PLSR to obtain robust models able to handle the different types of perturbations likely to affect NIR data. Several types of noise were investigated, as well as different ensemble methods aimed at obtaining robust PLS models able to predict both the original and the perturbed test data. The suitability of ensemble methods for building robust calibration models was investigated and compared with extended multiplicative signal correction (EMSC) and other calibration approaches in a real case of temperature compensation. EMSC and the ensemble methods appear to be the most appropriate approaches, yielding the best results in terms of accuracy and prediction ability with a reduced calibration data set.
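A minimal sketch of noise-simulation augmentation combined with a PLS ensemble, assuming X holds calibration spectra (rows) and y the reference values: each ensemble member is fitted on a differently perturbed copy of the calibration set (additive noise plus a random baseline offset), and predictions are averaged. The perturbation types, their magnitudes, and the function names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def perturb(X, rng, noise_sd=0.002, max_offset=0.01):
    """Simulated perturbations: additive Gaussian noise plus a random baseline
    offset per spectrum."""
    return (X + rng.normal(0.0, noise_sd, X.shape)
              + rng.uniform(-max_offset, max_offset, (X.shape[0], 1)))

def pls_ensemble(X, y, n_models=10, n_components=8, seed=0):
    """Fit one PLS model per perturbed copy of the calibration spectra."""
    rng = np.random.default_rng(seed)
    return [PLSRegression(n_components=n_components).fit(perturb(X, rng), y)
            for _ in range(n_models)]

def ensemble_predict(models, X):
    return np.mean([m.predict(X) for m in models], axis=0)
```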

14.
Analysis of DNA sequences isolated directly from the environment, known as metagenomics, produces a large quantity of genome fragments that need to be classified into specific taxa. Most composition-based classification methods use all features rather than a subset of features that may maximize classifier accuracy. We show that feature selection methods can boost the performance of taxonomic classifiers. This work proposes three filter-based feature selection methods that stem from information theory: (1) a technique that combines Kullback-Leibler divergence, mutual information, and distance information; (2) a text-mining technique, TF-IDF; and (3) minimum redundancy-maximum relevance (mRMR). The feature selection methods are compared by how well they improve support vector machine classification of genomic reads. Overall, the 6-mer mRMR method performs well, especially at the phylum level. If the total number of features is very large, feature selection becomes difficult because a small subset of features that captures a majority of the data variance is less likely to exist. We therefore conclude that there is a trade-off between feature set size and feature selection method for optimizing classification performance. For larger feature set sizes, TF-IDF works better at finer resolutions, while mRMR performs best of all methods at N = 6 for all taxonomic levels.
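A small sketch of the TF-IDF variant, treating each read as a string and every overlapping 6-mer as a term via scikit-learn's character n-gram analyzer, followed by a linear SVM. The toy reads and phylum labels below are invented placeholders; a real run would use metagenomic fragments and taxonomic annotations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# invented toy reads and labels; real input would be metagenomic fragments
reads = ["ACGTACGTGGCTAACGT", "TTGACCGTAGCTAGCTA",
         "ACGTACGTGGCTTACGA", "TTGACCGTAGCTTGCTA"]
labels = ["Proteobacteria", "Firmicutes", "Proteobacteria", "Firmicutes"]

# TF-IDF over overlapping 6-mers (character 6-grams), then a linear SVM
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(6, 6), lowercase=False),
    LinearSVC())
model.fit(reads, labels)
print(model.predict(["ACGTACGTGGCTAACGA"]))
```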

15.
16.
Preterm birth, defined as delivery before 37 weeks' gestation, continues to affect 8–15% of all pregnancies and is associated with significant neonatal morbidity and mortality. Effective prediction of the timing of delivery among women at significant risk of preterm birth would allow proper implementation of prophylactic therapeutic interventions. This paper first aims to develop a model that acts as a decision support system for pregnant women at high risk of delivering prematurely, before they undergo cervical cerclage. The model predicts whether the pregnancy will continue beyond 26 weeks' gestation and the potential value of adding the cerclage in prolonging the pregnancy. The second aim is to develop a model that predicts the timing of spontaneous delivery in this high-risk cohort after cerclage, helping treating physicians to define the chronology of management in relation to the risk of preterm birth and to reduce the associated neonatal complications. Data from 274 pregnancies managed with cervical cerclage were included; 29 of the procedures involved multiple pregnancies. To build the first model, the SMOTE data balancing technique was applied to overcome the highly imbalanced class distribution in the dataset. Four classification models, namely decision tree, random forest, k-nearest neighbors (k-NN), and neural network (NN), were then used to build the prediction model. The random forest classifier gave the best results in terms of G-mean and sensitivity, with values of 0.96 and 1.00 respectively, achieved at an oversampling ratio of 200%. For the second prediction model, five methods were used to predict the time of spontaneous delivery: linear regression, Gaussian process, random forest, K-star, and the LWL classifier. The random forest performed best, with a correlation value of 0.752. In conclusion, computational models can be developed to predict the need for cerclage and the gestation of delivery after this procedure, and these models have moderate-to-high sensitivity for clinical application.
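A hedged sketch of the first model's balancing step: SMOTE applied to the training fold only, a random forest fitted on the balanced data, and the G-mean computed as the geometric mean of sensitivity and specificity. The synthetic cohort, feature count, and oversampling settings below are placeholders rather than the study's data.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the cerclage cohort (class 1 = delivery before 26 weeks)
X, y = make_classification(n_samples=274, n_features=15, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# oversample only the training fold, then fit the random forest
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)

y_pred = rf.predict(X_te)
sens = recall_score(y_te, y_pred, pos_label=1)
spec = recall_score(y_te, y_pred, pos_label=0)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, G-mean {np.sqrt(sens * spec):.2f}")
```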

17.
Representative molecules from 10 classes of prohibited substances were taken from the World Anti-Doping Agency (WADA) list, augmented by molecules from corresponding activity classes found in the MDDR database. Together with some explicitly allowed compounds, these formed a set of 5245 molecules. Five types of fingerprints were calculated for these substances. The random forest classification method was used to predict membership of each prohibited class on the basis of each type of fingerprint, using 5-fold cross-validation. We also used a k-nearest neighbors (kNN) approach, which worked well for the smallest values of k. The most successful classifiers are based on Unity 2D fingerprints and give very similar Matthews correlation coefficients of 0.836 (kNN) and 0.829 (random forest). The kNN classifiers tend to give a higher recall of positives at the expense of lower precision. A naïve Bayesian classifier, however, lies much further toward the extreme of high recall and low precision. Our results suggest that it will be possible to produce a reliable and quantitative assignment of membership or otherwise of each class of prohibited substances. This should aid the fight against the use of bioactive novel compounds as doping agents, while also protecting athletes against unjust disqualification.
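A schematic comparison of random forest and kNN on fingerprint-like bit vectors, scored with the Matthews correlation coefficient; the binarised synthetic matrix below merely stands in for the Unity 2D fingerprints of the 5245 molecules, which would be computed with a cheminformatics toolkit, and the Jaccard metric is used for kNN as a Tanimoto-style similarity.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

# binarised synthetic features standing in for 1024-bit 2D fingerprints
X, y = make_classification(n_samples=500, n_features=1024, n_informative=40, random_state=0)
X = (X > 0)   # fingerprint-like boolean bit vectors; y = 1 means "prohibited class member"

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "3-NN (Jaccard)": KNeighborsClassifier(n_neighbors=3, metric="jaccard"),
}
for name, clf in classifiers.items():
    pred = cross_val_predict(clf, X, y, cv=5)
    print(name, "MCC:", round(matthews_corrcoef(y, pred), 3))
```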

18.
A new in silico model is developed to predict cytochrome P450 2D6 inhibition from 2D chemical structure. Using a diverse training set of 100 compounds with published inhibition constants, an ensemble approach to recursive partitioning is applied to create a large number of classification trees, each of which yields a yes/no prediction about inhibition for a given compound. These binary classifications are combined to provide an overall prediction, which answers the yes/no question about inhibition and provides a measure of confidence about that prediction. Compared to single-tree models, the ensemble approach is less sensitive to noise in the experimental data as well as to changes in the training set. Internal validation tests indicated an overall classification accuracy of 75%, whereas predictions applied to an external set of 51 compounds yielded 80% accuracy, with all inhibitors correctly identified. The speed and 2D nature of this model make it appropriate for high-throughput processing of large chemical libraries, and the confidence level provides a continuous scale on which to prioritize compounds.
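A minimal sketch of the ensemble-of-trees idea with a built-in confidence measure: bagged classification trees (scikit-learn's default base learner for BaggingClassifier is a decision tree), where the fraction of trees voting "inhibitor" supplies both the yes/no call and the confidence. The descriptors, tree-growing details, and thresholds of the published CYP2D6 model are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

# synthetic stand-in for 2D descriptors of the 100 training compounds
X, y = make_classification(n_samples=100, n_features=30, n_informative=10, random_state=0)

# ensemble of classification trees grown on bootstrap samples
ens = BaggingClassifier(n_estimators=500, random_state=0).fit(X, y)

# fraction of trees predicting "inhibitor" gives the call plus a confidence level
for frac in ens.predict_proba(X[:5])[:, 1]:
    call = "inhibitor" if frac >= 0.5 else "non-inhibitor"
    print(f"{call} (confidence {max(frac, 1 - frac):.2f})")
```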

19.
DNA microarray data has been widely used in cancer research because of its power to distinguish between tumor classes. However, typical gene expression data are high-dimensional and imbalanced, which poses a severe challenge for traditional machine learning methods attempting to construct a classifier that performs well on both the minority and the majority classes. As one of the most successful feature weighting techniques, Relief is considered particularly well suited to high-dimensional problems. Unfortunately, almost all Relief-based methods ignore the class imbalance of the data. This study identifies that existing Relief-based algorithms may underestimate features that discriminate the minority classes and ignore the distributional characteristics of minority class samples, thereby introducing an additional bias towards classification into the majority classes. To this end, a new method, named imRelief, is proposed for efficiently handling high-dimensional imbalanced gene expression data. imRelief corrects the bias towards the majority classes and accounts for the scattered distribution of minority class samples when estimating feature weights. In this way, imRelief can reward features that perform well at separating the minority classes from the other classes. Experiments on four microarray gene expression data sets demonstrate the effectiveness of imRelief in both feature weighting and feature subset selection applications.

20.
Aiming at the prediction of pleiotropic effects of drugs, we investigated the multilabel classification of drugs having one or more of 100 different kinds of activity labels. The structural features of each drug molecule were represented with the topological fragment spectra method proposed in our previous work. Support vector machines (SVM) were used for the classification and prediction of activity classes, and multilabel classification was carried out by a set of SVM classifiers. The collective SVM classifiers were trained with a training set of 59,180 compounds and validated with another (validation) set of 29,590 compounds. For a test set of 9,864 compounds, the classifiers correctly assigned 80.8% of the drugs to their own activity classes. The SVM classifiers also successfully predicted the activity spectra of multilabel compounds.
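A compact sketch of the multilabel set-up: one SVM per activity label via a one-vs-rest wrapper, trained on a binary label-indicator matrix. The synthetic data use 20 labels instead of the paper's 100, and simple random features replace the topological fragment spectra descriptors.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# toy multilabel data: Y is a binary indicator matrix with one column per activity label
X, Y = make_multilabel_classification(n_samples=2000, n_features=200, n_classes=20,
                                      n_labels=2, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# a set of SVM classifiers, one per activity class, forms the multilabel classifier
clf = OneVsRestClassifier(LinearSVC()).fit(X_tr, Y_tr)
print("exact-match accuracy:", accuracy_score(Y_te, clf.predict(X_te)))
```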
