PAC-learning a decision tree with pruning
Abstract: Empirical studies have shown that the performance of decision tree induction usually improves when the trees are pruned. Whether these results hold in general, and to what extent pruning improves the accuracy of the learned concept, has not been investigated theoretically. This paper provides a theoretical study of pruning. We focus on a particular type of pruning and determine a bound on the error due to pruning. This is combined with PAC (probably approximately correct) learning theory to determine a sample size sufficient to guarantee a probabilistic bound on the concept error. We also discuss additional pruning rules and give an analysis of the pruning error.
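As a rough illustration of the kind of sample-size reasoning the abstract refers to, the sketch below computes the standard finite-hypothesis-class PAC bound, m >= (1/eps)(ln|H| + ln(1/delta)), with an optional slack term standing in for an error budget reserved for pruning. This is not the paper's own bound; the function name, the `pruning_error` parameter, and the numeric values are illustrative assumptions only.

```python
import math


def pac_sample_size(ln_hypothesis_count: float, epsilon: float, delta: float,
                    pruning_error: float = 0.0) -> int:
    """Standard PAC sample-size bound for a consistent learner over a finite
    hypothesis class H: m >= (1/eps) * (ln|H| + ln(1/delta)).

    `pruning_error` is a hypothetical illustration of reserving part of the
    total error budget `epsilon` for error introduced by pruning; the sample
    size then only has to cover the remaining budget.
    """
    eps = epsilon - pruning_error
    if eps <= 0:
        raise ValueError("pruning error budget must be smaller than epsilon")
    return math.ceil((ln_hypothesis_count + math.log(1.0 / delta)) / eps)


# Illustrative numbers only: ln|H| ~ 5000 for some bounded-size tree class,
# total error 5%, of which 1% is allotted to pruning, confidence 95%.
print(pac_sample_size(ln_hypothesis_count=5000.0, epsilon=0.05, delta=0.05,
                      pruning_error=0.01))
```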
This article is indexed in ScienceDirect and other databases.