
Identifying predictive hubs to condense the training set of k-nearest neighbour classifiers
Authors:Ludwig Lausser  Christoph Müssel  Alexander Melkozerov  Hans A Kestler
Institution:1. Research Group Bioinformatics and Systems Biology, Institute of Neural Information Processing, University of Ulm, 89069, Ulm, Germany
2. Department of Television and Control, Tomsk State University of Control Systems and Radioelectronics, Lenin ave. 40, 634050, Tomsk, Russia
Abstract: The k-nearest neighbour classifier is widely used owing to its inherent simplicity and its freedom from model assumptions. Although the approach has been shown to yield near-optimal classification performance in the limit of infinitely many samples, selecting the most decisive data points can considerably improve classification accuracy in real settings with a limited number of samples. At the same time, restricting the training set to a subset of representative samples reduces the required storage and computational resources. We devised a new approach that selects a representative training subset on the basis of an evolutionary optimization procedure. This method chooses those training samples that have a strong influence on the correct prediction of the other training samples, in particular those with uncertain labels. The performance of the algorithm is evaluated on several data sets, and graphical examples of the selection procedure are provided.
Indexed in SpringerLink and other databases.