Similar Literature
20 similar documents found
1.
Feature selection is frequently used as a preprocessing step to machine learning. The removal of irrelevant and redundant information often improves the performance of learning algorithms. This paper is a comparative study of feature selection in drug discovery, with a focus on aggressive dimensionality reduction. Five methods were evaluated: information gain, mutual information, the χ²-test, odds ratio, and the GSS coefficient. Two well-known classification algorithms, Naïve Bayesian and Support Vector Machine (SVM), were used to classify the chemical compounds. The results showed that the Naïve Bayesian classifier benefited significantly from feature selection, while the SVM performed better when all features were used. In this experiment, information gain and the χ²-test were the most effective feature selection methods. Using information gain with a Naïve Bayesian classifier, removal of up to 96% of the features yielded improved classification accuracy as measured by sensitivity. When information gain was used to select the features, the SVM was much less sensitive to the reduction of the feature space: the feature set size was reduced by 99% while losing only a few percentage points of sensitivity (from 58.7% to 52.5%) and specificity (from 98.4% to 97.2%). In contrast to information gain and the χ²-test, mutual information performed relatively poorly because of its bias toward rare features and its sensitivity to probability estimation errors.
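
As a rough illustration of the kind of pipeline this abstract describes, the sketch below scores features with the χ²-statistic and with mutual information (an information-gain proxy), keeps only a small fraction of them, and compares a Naïve Bayes classifier against a linear SVM. The synthetic binary descriptors, the 4% retention rate, and all parameter choices are placeholders, not the paper's data or settings.

```python
# Score features with the chi-squared statistic and with mutual information
# (an information-gain proxy), keep only a small fraction, and compare
# Naive Bayes against a linear SVM. All data and settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=1000, n_informative=30,
                           random_state=0)
X = (X > 0).astype(int)  # binarize so chi2's non-negativity requirement holds

for scorer, name in [(chi2, "chi2"), (mutual_info_classif, "info gain")]:
    for clf, clf_name in [(BernoulliNB(), "NB"), (LinearSVC(max_iter=5000), "SVM")]:
        # keep 4% of the features, mimicking the ~96% reduction reported above
        pipe = make_pipeline(SelectKBest(scorer, k=40), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{name:9s} + {clf_name}: CV accuracy = {acc:.3f}")
```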

2.
3.
Analysis of DNA sequences isolated directly from the environment, known as metagenomics, produces a large quantity of genome fragments that need to be classified into specific taxa. Most composition-based classification methods use all features rather than a subset of features that may maximize classifier accuracy. We show that feature selection methods can boost the performance of taxonomic classifiers. This work proposes three different filter-based feature selection methods that stem from information theory: (1) a technique that combines Kullback-Leibler divergence, mutual information, and distance information, (2) a text-mining technique, TF-IDF, and (3) minimum redundancy-maximum relevance (mRMR). The feature selection methods are compared by how well they improve support vector machine classification of genomic reads. Overall, the 6-mer mRMR method performs well, especially at the phylum level. If the total number of features is very large, feature selection becomes difficult because a small subset of features that captures most of the data variance is less likely to exist. We therefore conclude that there is a trade-off between feature set size and feature selection method when optimizing classification performance. For larger feature set sizes, TF-IDF works better at finer resolutions, while mRMR performs best of all methods at N=6 for all taxonomic levels.
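
A minimal sketch of the TF-IDF variant described above: each read is treated as a "document" of overlapping 6-mers, the k-mer counts are TF-IDF-weighted, and a linear SVM does the taxonomic assignment. The reads, labels, and vectorizer settings are toy assumptions, not the study's pipeline.

```python
# Treat each read as a "document" of overlapping 6-mers, weight the k-mer
# counts with TF-IDF, and classify with a linear SVM. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def to_kmers(seq, k=6):
    """Turn a DNA read into a space-separated string of overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

reads = ["ATGCGTACGTTAGCATGCGT", "TTTTGCAAGGCTAGCTAGGA",
         "ATGCGTACGATAGCATGCGA", "TTTAGCAAGGCTAGCTAGGC"]
labels = ["Proteobacteria", "Firmicutes", "Proteobacteria", "Firmicutes"]

docs = [to_kmers(r) for r in reads]
model = make_pipeline(TfidfVectorizer(analyzer="word", token_pattern=r"\S+"),
                      LinearSVC())
model.fit(docs, labels)
print(model.predict([to_kmers("ATGCGTACGTTAGCATGCGA")]))
```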

4.
The evolutionary relationships of organisms are traditionally delineated by alignment-based methods using DNA or protein sequences. In the post-genome era, the phylogeny of life can be inferred from many sources, such as genomic features, rather than from the comparison of one or a few genes. To investigate whether the physicochemical properties of protein sequences reflect phylogenetic relationships, an alignment-free method using a support vector machine (SVM) classifier was implemented to establish the phylogenetic relationships among protein sequences. Two types of datasets, "Enzymatic" (sequences assigned an EC accession) and "Proteins", were used to train the SVM classifiers. By computing the F-score for feature selection, we find that the classification accuracies of the trained SVM classifiers can be significantly enhanced, to 84% and 80% for the Enzymatic and Proteins datasets respectively, when the protein sequences are represented by the top 255 selected features. This shows that a selection of physicochemical features of amino acid sequences is sufficient for inferring the phylogenetic properties of the protein sequences. Moreover, we find that the selected physicochemical features appear to correlate with the physiological characteristics of the taxonomic classes being classified.
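
The F-score mentioned above is commonly computed as follows for a two-class problem (the definition popularized with LIBSVM-style feature selection); the sketch ranks features by this score on random placeholder data rather than the paper's physicochemical descriptors.

```python
# F-score ranking for a two-class problem; higher = more discriminative.
# The data are random placeholders with a few informative columns planted.
import numpy as np

def f_scores(X, y):
    """F-score of each column of X for binary labels y."""
    pos, neg = X[y == 1], X[y == 0]
    mean_all, mean_pos, mean_neg = X.mean(0), pos.mean(0), neg.mean(0)
    numer = (mean_pos - mean_all) ** 2 + (mean_neg - mean_all) ** 2
    denom = pos.var(0, ddof=1) + neg.var(0, ddof=1) + 1e-12
    return numer / denom

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 1.0           # make the first five features informative
top = np.argsort(f_scores(X, y))[::-1][:10]
print("top-ranked feature indices:", top)
```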

5.
Support vector machine, artificial neural network, regularized logistic regression, and k-nearest neighbor machine learning methods were used to build activity classification models for 761 dihydrofolate reductase inhibitors. Constitutional and topological descriptors were used to characterize the molecular structures and physicochemical properties of the inhibitors; the Kennard-Stone method was used to design the training set, and Metropolis Monte Carlo simulated annealing was used for variable selection. The results show that the support vector machine outperformed the other machine learning methods, and the best model gave good predictions with a classification accuracy of 91.62%. This demonstrates that, with appropriate training set design and variable selection, the support vector machine method is well suited to activity classification of dihydrofolate reductase inhibitors.
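
The Kennard-Stone design mentioned above can be sketched as a greedy max-min distance selection; the implementation below is a generic illustration on random descriptors, not the study's code or data.

```python
# Kennard-Stone training-set design: greedily pick the points that are
# farthest (max-min Euclidean distance) from those already selected.
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_train):
    dist = cdist(X, X)
    # start from the two most distant samples
    selected = list(np.unravel_index(dist.argmax(), dist.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_train:
        # for each candidate, distance to its nearest selected sample
        d_to_sel = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(d_to_sel.argmax())))
    return np.array(selected)

X = np.random.default_rng(1).normal(size=(100, 20))   # placeholder descriptors
train_idx = kennard_stone(X, n_train=70)
print(len(train_idx), "training samples selected")
```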

6.
ECS: an automatic enzyme classifier based on functional domain composition
Classification of enzymes is a prerequisite for understanding their function. Here, an automatic enzyme identifier based on a support vector machine (SVM), with feature vectors derived from protein functional domain composition, was built to identify enzymes, together with a classifier that assigns enzymes to six classes: oxidoreductases, transferases, hydrolases, lyases, isomerases, and ligases. A jackknife cross-validation test was adopted to evaluate the performance of the classifier. Success rates of 86.03% for enzyme/non-enzyme identification and 91.32% for enzyme classification were achieved, much better than the BLAST- and PSI-BLAST-based methods and better than several existing works. The results indicate that protein functional domain composition captures the major features that facilitate the identification and classification of proteins, demonstrating that the predictor could be an effective and promising high-throughput method in enzyme research. Moreover, a web-based software package, the Enzyme Classification System (ECS), for identification as well as classification of enzymes can be accessed at http://pcal.biosino.org/.
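
A jackknife test of the kind reported above is equivalent to leave-one-out cross-validation; the sketch below runs it for an SVM on synthetic presence/absence domain-composition vectors (the data, kernel, and class counts are assumptions).

```python
# Jackknife (leave-one-out) evaluation of an SVM on binary "functional domain
# composition" vectors; the domain vectors and six-class labels are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_domains, n_classes = 120, 300, 6
X = rng.integers(0, 2, size=(n_samples, n_domains))       # domain present/absent
y = rng.integers(0, n_classes, size=n_samples)            # EC main-class labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()  # jackknife success rate
print(f"jackknife success rate: {acc:.2%}")
```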

7.
High-dimensional datasets contain up to thousands of features and can result in immense computational costs for classification tasks. Such datasets therefore need a feature selection step before the classification process. The main idea behind feature selection is to choose a useful subset of features that significantly improves the comprehensibility of a classifier and maximizes the performance of the classification algorithm. In this paper, we propose a one-per-class model for high-dimensional datasets. In the proposed method, we extract a different feature subset for each class in a dataset and apply the classification process to the multiple feature subsets. Finally, we merge the prediction results of the feature subsets and determine the final class label of an unknown instance. The originality of the proposed model lies in using an appropriate feature subset for each class. To show the usefulness of the proposed approach, we developed an application method following the proposed model. Our results confirm that the method produces higher classification accuracy than previous feature selection and classification methods.
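
One plausible way to realize such a one-per-class scheme is sketched below: a separate feature subset is selected for each class with a one-vs-rest relevance score, one binary classifier is trained per class on its own subset, and the per-class scores are merged for the final label. This is an illustrative reconstruction, not the authors' exact procedure.

```python
# One-per-class sketch: class-specific feature subsets, one binary classifier
# per class, and a merge of the per-class probabilities for the final label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=2000, n_informative=40,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

k = 50
selectors, models = {}, {}
for c in np.unique(y):
    y_bin = (y == c).astype(int)                     # one-vs-rest target
    sel = SelectKBest(f_classif, k=k).fit(X, y_bin)  # class-specific subset
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(X), y_bin)
    selectors[c], models[c] = sel, clf

def predict(X_new):
    # merge per-class probabilities and take the highest-scoring class
    classes = sorted(models)
    scores = np.column_stack(
        [models[c].predict_proba(selectors[c].transform(X_new))[:, 1]
         for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]

print("training accuracy:", (predict(X) == y).mean())
```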

8.
This paper presents a method for the automatic classification of DNA spots and the extraction of the associated profiles in DNA polyacrylamide gel electrophoresis, integrating image processing techniques with chemometric tools. Software implementing this method was developed; for feature extraction, a combination of PCA and a C4.5 decision tree was used. Because only DNA spots are useful for good profile extraction, a two-class classification problem between DNA spots and non-DNA spots had to be solved. To perform the classification with high speed, effectiveness, and robustness, comparative studies among support vector machine (SVM), k-NN, and PLS-DA classifiers were carried out. The best results were obtained with the SVM classifier, confirming the advantages attributed to it in the literature as a two-class classifier. A sequential cluster leader algorithm and an algorithm developed for restoring missing spots in a pattern were needed to complete the profile extraction step. The experimental results show that the method is computationally efficient and effective, and provides a very useful tool to decrease the time and increase the quality of the specialist's analysis.

9.
A chemometric study of the composition of ancient ceramics from the Laohudong kiln in Hangzhou
The support vector machine algorithm was used to study two provenance and dating problems related to the ancient ceramics of the Laohudong kiln in Hangzhou. As a relatively new classification algorithm in chemometrics, the support vector machine shows good generalization on small-sample problems and, combined with feature selection, can effectively handle data with few samples and many features. This study combined support vector machines, feature selection, and other chemometric algorithms to investigate the provenance and dating of the ancient kiln site near Wansongling at the foot of Phoenix Hill in Hangzhou and of the "transmitted Ge ware". The results show that the products of the Laohudong kiln are clearly distinct from those of the Jiaotanxia kiln, that the shards collected from the ground near Wansongling are pieces that slid down from the Song-dynasty stratum of the Laohudong kiln, and that the "transmitted Ge ware" samples were probably products of the Laohudong kiln in the Yuan dynasty. The experiments indicate that the support vector machine algorithm combined with chemical analysis can serve as a new approach to the provenance and dating of ancient ceramics.

10.
Influenza is a major respiratory infectious disease with a high incidence in the general population and considerable mortality among elderly and high-risk patients. Studies have shown that inhibiting neuraminidase (NA) blocks viral RNA replication, making NA an important drug target for effective treatment of the H1N1 influenza virus. Computational virtual screening and prediction of NA inhibitors have become increasingly important, and structure-based rational drug design targeting the enzyme's active site to develop H1N1 neuraminidase inhibitors is a current focus of drug research. In this work, several machine learning methods (support vector machine (SVM), k-nearest neighbor (k-NN), and C4.5 decision tree (C4.5 DT)) were used to build classification models for known neuraminidase inhibitors (NAIs) and non-inhibitors (non-NAIs). A set of 227 structurally diverse compounds (72 NAIs and 155 non-NAIs) was used to test the classification system, and recursive feature elimination was applied to select the property descriptors relevant to NA inhibitor classification and improve prediction accuracy. On an independent validation set, the overall prediction accuracy was 75.9%-92.6%, with 64.3%-78.6% for NA inhibitors and 77.5%-97.5% for non-inhibitors; the SVM gave the best overall accuracy (92.6%). The results show that machine learning methods such as SVM can effectively predict potential NA inhibitors in unknown datasets and help identify the molecular descriptors relevant to their activity.
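
The recursive feature elimination step mentioned in the abstract can be sketched as follows with a linear SVM; the 227 compounds keep the reported class sizes, but the descriptors are simulated placeholders.

```python
# Recursive feature elimination (RFE) with a linear SVM for separating NA
# inhibitors from non-inhibitors; descriptors are simulated stand-ins.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(227, 100))          # 100 hypothetical molecular descriptors
y = np.array([1] * 72 + [0] * 155)       # 72 NAIs, 155 non-NAIs
X[y == 1, :10] += 0.8                    # make a few descriptors informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
rfe = RFE(LinearSVC(max_iter=5000), n_features_to_select=20).fit(X_tr, y_tr)
print("selected descriptor indices:", np.where(rfe.support_)[0])
print("independent-set accuracy:", rfe.score(X_te, y_te))
```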

11.
12.
Proteins are the macromolecules responsible for almost all biological processes in a cell. With the large number of protein sequences available from different sequencing projects, the challenge for scientists is to characterize their functions. As wet-lab methods are time consuming and expensive, many computational methods such as FASTA, PSI-BLAST, DNA microarray clustering, and nearest-neighborhood classification on protein–protein interaction networks have been proposed. The support vector machine is one such method that has been used successfully for several problems, such as protein fold recognition and protein structure prediction. Cai et al. (2003) used SVMs to classify proteins into different functional classes and to predict their function, representing the protein sequences by their physicochemical properties. In this paper, a model comprising feature subset selection followed by a multiclass support vector machine is proposed to determine the functional class of a newly generated protein sequence. To train and test the model, 32 physicochemical properties of enzymes from 6 enzyme classes are considered. To determine the features that contribute significantly to functional classification, Sequential Forward Floating Selection (SFFS), Orthogonal Forward Selection (OFS), and SVM Recursive Feature Elimination (SVM-RFE) are used, and it is observed that of the 32 properties considered initially, only 20 features are sufficient to classify the proteins into their functional classes with an accuracy ranging from 91% to 94%. On comparison, OFS followed by SVM performs better than the other methods. Our model generalizes the existing model to include multiclass classification and to identify the most significant features affecting protein function.
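
As a rough stand-in for SFFS/OFS, the sketch below uses scikit-learn's plain forward SequentialFeatureSelector (no floating step) to pick 20 of 32 synthetic "physicochemical" features before a multiclass SVM; the dataset and parameters are illustrative assumptions.

```python
# Plain forward sequential feature selection (a simple stand-in for SFFS/OFS)
# picks 20 of 32 synthetic features, then a multiclass SVM is evaluated on
# the reduced set. Dataset and parameters are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=32, n_informative=20,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

svm = SVC(kernel="rbf", gamma="scale")
sfs = SequentialFeatureSelector(svm, n_features_to_select=20,
                                direction="forward", cv=3).fit(X, y)
X_sel = sfs.transform(X)
print("selected feature indices:", np.where(sfs.get_support())[0])
print("CV accuracy on 20 selected features:",
      cross_val_score(svm, X_sel, y, cv=5).mean())
```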

13.
14.
Many gram-negative bacteria use type IV secretion systems to deliver effector molecules to a wide range of target cells. These substrate proteins, called type IV secreted effectors (T4SEs), manipulate host cell processes during infection, often resulting in severe disease or even death of the host. Identification of putative T4SEs has therefore become a very active research topic in bioinformatics because of its vital role in understanding host-pathogen interactions. PSI-BLAST profiles have been experimentally validated to provide important and discriminative evolutionary information for various protein classification tasks. In the present study, an accurate computational predictor termed iT4SE-EP was developed for identifying T4SEs by extracting evolutionary features from position-specific scoring matrix and position-specific frequency matrix profiles. First, four encoding strategies were designed to transform protein sequences into fixed-length feature vectors based on the two profiles. Then, a feature selection technique based on the random forest algorithm was used to reduce redundant or irrelevant features without much loss of information. Finally, the optimal features were input into a support vector machine classifier to carry out the prediction of T4SEs. Our experimental results demonstrate that iT4SE-EP outperformed most existing methods on the independent dataset test.
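
The two-step scheme described above (random-forest-based feature selection feeding an SVM) might look roughly like the sketch below; the 400-dimensional feature vectors are random placeholders for the real PSSM/PSFM encodings.

```python
# Rank profile-derived features with random-forest importances, keep the top
# 100, and feed them to an SVM. The feature vectors are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 400))          # e.g. 20x20 PSSM-composition features
y = rng.integers(0, 2, size=400)         # T4SE (1) vs non-T4SE (0)
X[y == 1, :10] += 0.5                    # plant a weak signal so the demo is non-trivial

pipe = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=0),
                    max_features=100, threshold=-np.inf),  # keep top 100 by importance
    SVC(kernel="rbf", gamma="scale"),
)
print("5-fold CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```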

15.
16.
A probabilistic support vector machine (SVM) in combination with ECFP_4 (Extended Connectivity Fingerprints) was applied to establish a druglikeness filter for molecules. Here, the World Drug Index (WDI) and the Available Chemical Directory (ACD) were used as surrogates for druglike and nondruglike molecules, respectively. Compared with published methods using the same data sets, the classifier significantly improved the prediction accuracy, especially when using a larger data set of 341,601 compounds, which pushed the correct classification rate up to 92.73%. In addition, the features most characteristic of drugs and nondrugs found by the current method were visualized; these might be useful as guiding fragments for de novo drug design and fragment-based drug design.
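
A toy version of such a druglikeness filter, assuming RDKit is available: ECFP4-like Morgan fingerprints (radius 2) feed an SVM with probability outputs. The handful of SMILES strings merely stand in for the WDI/ACD surrogate sets, and none of the settings are the paper's.

```python
# ECFP4-like Morgan fingerprints (radius 2) from RDKit plus an SVM with
# probability outputs. The SMILES lists are tiny stand-ins for the WDI/ACD
# surrogate sets; RDKit is assumed to be installed.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.svm import SVC

def ecfp4(smiles, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(fp)

druglike = ["CC(=O)Oc1ccccc1C(=O)O",        # aspirin
            "CC(C)Cc1ccc(cc1)C(C)C(=O)O",   # ibuprofen
            "CC(=O)Nc1ccc(O)cc1",           # paracetamol
            "CN1CCC[C@H]1c1cccnc1",         # nicotine
            "CCOC(=O)c1ccc(N)cc1"]          # benzocaine
nondrug = ["CCCCCCCCCCCCCCCC", "ClC(Cl)(Cl)Cl", "c1ccccc1", "CCO", "C1CCCCC1"]

X = np.array([ecfp4(s) for s in druglike + nondrug])
y = np.array([1] * len(druglike) + [0] * len(nondrug))

clf = SVC(kernel="rbf", gamma="scale", probability=True).fit(X, y)
print("P(druglike) for caffeine:",
      clf.predict_proba([ecfp4("Cn1cnc2c1c(=O)n(C)c(=O)n2C")])[0, 1])
```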

17.
Dimension reduction is a crucial technique in machine learning and data mining and is widely used in medicine, bioinformatics, and genetics. In this paper, we propose a two-stage local dimension reduction approach for classification on microarray data. In the first stage, a new L1-regularized feature selection method is defined to remove irrelevant and redundant features and to select the important features (biomarkers). In the second stage, PLS-based feature extraction is applied to the selected features to extract synthetic features that best reflect the discriminating characteristics for classification. The suitability of the proposal is demonstrated in an empirical study on ten widely used microarray datasets, and the results show its effectiveness and competitiveness compared with four state-of-the-art methods. The experimental results on the St Jude dataset show that our method can be effectively applied to microarray data analysis for subtype prediction and the discovery of gene coexpression.
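
The two-stage idea can be sketched as follows: an L1-penalized logistic regression screens candidate biomarkers, PLS compresses the survivors into a few latent components, and a simple classifier works on the PLS scores. The expression matrix, the planted signal, and all hyperparameters are assumptions for illustration.

```python
# Stage 1: L1-penalized logistic regression screens candidate biomarkers.
# Stage 2: PLS compresses the survivors into a few latent components,
# on which a simple classifier is trained. All data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5000))         # 120 samples x 5000 "genes"
y = rng.integers(0, 2, size=120)
X[y == 1, :20] += 1.0                    # plant some differentially expressed genes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

sel = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0)).fit(X_tr, y_tr)
pls = PLSRegression(n_components=3).fit(sel.transform(X_tr), y_tr)
clf = LinearDiscriminantAnalysis().fit(pls.transform(sel.transform(X_tr)), y_tr)

acc = clf.score(pls.transform(sel.transform(X_te)), y_te)
print(f"test accuracy on synthetic data: {acc:.2f}")
```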

18.
Protein structural class prediction for low-similarity sequences is a significant challenge and a deeply explored subject. It plays an important role in drug design, protein fold recognition, functional analysis, and several other biological applications. In this paper, we worked with two benchmark databases from the literature, (1) 25PDB and (2) 1189, to apply our proposed method for predicting protein structural class. We first transformed the protein sequences into DNA sequences and then into binary sequences. We then applied symmetrical recurrence quantification analysis (the new approach), obtaining 8 features from each symmetry plot computation. Machine learning algorithms, namely Linear Discriminant Analysis (LDA), Random Forest (RF), and Support Vector Machine (SVM), were used, and a comparison was made to find the best classifier for protein structural class prediction. Results show that symmetrical recurrence quantification as the feature extraction method with an RF classifier outperformed existing methods, with an overall accuracy of 100% without overfitting.

19.
20.
张玉玺  熊庆  杨刚  李梦龙 《分析化学》2007,35(10):1449-1454
Studying the mass spectrometry information of pesticides helps with the identification of pesticide residues and the screening of pesticide precursor compounds. Following GB 4839-1998, four classes of insecticides with common chemical structures were selected: organochlorine, organophosphorus, carbamate, and pyrethroid pesticides. Mass spectra for these four structural classes were extracted from the NIST 2.0 mass spectral database; after mathematical transformation and feature selection combining a genetic algorithm with partial least squares regression (GA-PLS), an optimal set of mass spectral features was determined. Finally, prediction models were built with k-nearest neighbors (KNN), support vector machine (SVM), and boosting with classification and regression trees (AdaBoost-CART). The experiments show that SVM and AdaBoost-CART achieve good prediction results using an optimal feature set containing only a small number of features.
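
An AdaBoost-CART model of the kind compared above is simply AdaBoost over CART base trees; the sketch below runs it on synthetic "mass-spectral" features with the four pesticide classes as labels (all data and settings are placeholders).

```python
# AdaBoost over shallow CART trees applied to mass-spectral feature vectors.
# The spectra and the four pesticide-class labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_class, n_features = 40, 60          # 60 selected m/z features (placeholder)
X = rng.normal(size=(4 * n_per_class, n_features))
y = np.repeat(["organochlorine", "organophosphorus", "carbamate", "pyrethroid"],
              n_per_class)
for c, cls in enumerate(np.unique(y)):
    X[y == cls, c * 5:(c + 1) * 5] += 1.5  # give each class a few diagnostic peaks

# first argument is the CART base learner (passed positionally for
# compatibility across scikit-learn versions)
model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=200)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```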
