On Early Stopping in Gradient Descent Learning
Authors: Yuan Yao, Lorenzo Rosasco, Andrea Caponnetto
Affiliation: (1) Department of Mathematics, University of California, Berkeley, CA 94720, USA; (2) C.B.C.L., Massachusetts Institute of Technology, Bldg. E25-201, 45 Carleton St., Cambridge, MA 02142, USA; (3) DISI, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy
Abstract: In this paper we study a family of gradient descent algorithms for approximating the regression function from reproducing kernel Hilbert spaces (RKHSs), the family being characterized by polynomially decreasing step sizes (learning rates). By solving a bias-variance trade-off we obtain an early stopping rule and probabilistic upper bounds on the convergence of the algorithms. We also discuss the implications of these results for classification, where fast convergence rates can be achieved for plug-in classifiers. Finally, we address connections with Boosting, Landweber iterations, and online learning algorithms viewed as stochastic approximations of gradient descent.
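
The Python sketch below is not taken from the paper; it only illustrates the kind of procedure the abstract describes: kernel gradient descent on the empirical squared risk with step sizes decaying polynomially in the iteration number, stopped early before the iterates overfit. The paper derives a theoretical stopping rule from the bias-variance trade-off; as a practical surrogate, the sketch monitors error on a held-out set. All names and parameters here (gaussian_kernel, gd_early_stopping, eta, theta) are hypothetical choices for illustration.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between two sample sets.
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def gd_early_stopping(X, y, X_val, y_val, eta=1.0, theta=0.5, max_iter=500):
    """Gradient descent on the empirical squared risk in an RKHS,
    with step sizes eta_t = eta * (t+1)**(-theta) (polynomial decay).
    Stops when held-out error starts to rise, a practical stand-in
    for the paper's bias-variance early stopping rule."""
    n = len(y)
    K = gaussian_kernel(X, X)          # training Gram matrix
    K_val = gaussian_kernel(X_val, X)  # cross-kernel for held-out evaluation
    c = np.zeros(n)                    # coefficients: f_t = sum_i c[i] * k(x_i, .)
    best_err, best_c = np.inf, c.copy()
    for t in range(max_iter):
        step = eta * (t + 1) ** (-theta)
        c -= (step / n) * (K @ c - y)            # gradient step on empirical risk
        err = np.mean((K_val @ c - y_val) ** 2)  # held-out mean squared error
        if err < best_err:
            best_err, best_c = err, c.copy()
        elif err > 1.1 * best_err:               # heuristic early-stopping trigger
            break
    return best_c, best_err
```

Starting from the zero function, early iterations reduce bias while variance grows with the number of steps; stopping at the right iteration balances the two, which is the trade-off the paper's stopping rule is designed to resolve.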
Keywords:
This article has been indexed by SpringerLink and other databases.