Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
Authors:Sayan Mukherjee  Partha Niyogi  Tomaso Poggio  Ryan Rifkin
Affiliation:(1) Center for Biological and Computational Learning, Artificial Intelligence Laboratory, and McGovern Institute, USA;(2) MIT/Whitehead Institute, Center for Genome Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA;(3) Department of Computer Science and Statistics, University of Chicago, Chicago, IL 60637, USA;(4) Honda Research Institute, Boston, MA 02111, USA
Abstract: Solutions of learning problems by Empirical Risk Minimization (ERM), and almost-ERM when the minimizer does not exist, need to be consistent, so that they may be predictive. They also need to be well-posed in the sense of being stable, so that they can be used robustly. We propose a statistical form of stability, defined as leave-one-out (LOO) stability. We prove that for bounded loss classes LOO stability is (a) sufficient for generalization, that is, convergence in probability of the empirical error to the expected error, for any algorithm satisfying it and, (b) necessary and sufficient for consistency of ERM. Thus LOO stability is a weak form of stability that represents a sufficient condition for generalization for symmetric learning algorithms while subsuming the classical conditions for consistency of ERM. In particular, we conclude that a certain form of well-posedness and consistency are equivalent for ERM.
Dedicated to Charles A. Micchelli on his 60th birthday.
Mathematics subject classifications (2000): 68T05, 68T10, 68Q32, 62M20.
Tomaso Poggio: Corresponding author.
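The LOO stability discussed in the abstract can be illustrated numerically. The sketch below (an illustration assuming squared loss, where ERM over constant predictors reduces to the sample mean; the function names `erm_mean`, `sq_loss`, and `loo_stability` are hypothetical, not from the paper) compares the loss of the hypothesis trained on the full sample against the hypothesis trained with one point left out, evaluated at that held-out point, and takes the largest such deviation as a stability estimate:

```python
import random

def erm_mean(sample):
    # For squared loss over constant predictors, the empirical risk
    # minimizer is the sample mean.
    return sum(sample) / len(sample)

def sq_loss(f, z):
    # Squared loss of constant prediction f at data point z.
    return (f - z) ** 2

def loo_stability(sample):
    # Largest change in loss at point z_i when z_i is removed from
    # the training sample: max_i |V(f_S, z_i) - V(f_{S\i}, z_i)|.
    f_full = erm_mean(sample)
    beta = 0.0
    for i, z in enumerate(sample):
        f_loo = erm_mean(sample[:i] + sample[i + 1:])
        beta = max(beta, abs(sq_loss(f_full, z) - sq_loss(f_loo, z)))
    return beta

random.seed(0)
small = [random.uniform(0, 1) for _ in range(20)]
large = [random.uniform(0, 1) for _ in range(2000)]
print(loo_stability(small), loo_stability(large))
```

For this bounded-loss ERM example the stability measure shrinks as the sample grows, consistent with the paper's point that LOO stability of ERM accompanies generalization.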
Keywords: stability  inverse problems  generalization  consistency  empirical risk minimization  uniform Glivenko–Cantelli
This article is indexed by SpringerLink and other databases.