Meta-Strategy for Learning Tuning Parameters with Guarantees
Authors: Dimitri Meunier, Pierre Alquier
Institutions: 1. Istituto Italiano di Tecnologia, 16163 Genoa, Italy; 2. RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan
Abstract: Online learning methods, such as the online gradient algorithm (OGA) and exponentially weighted aggregation (EWA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario and propose a meta-strategy to learn these parameters from past tasks. Our strategy is based on the minimization of a regret bound. It allows us to learn, with guarantees, the initialization and the step size in OGA, as well as the prior or the learning rate in EWA. We provide a regret analysis of the strategy, which identifies settings in which meta-learning indeed improves on learning each task in isolation.
Keywords: meta-learning, hyperparameters, priors, online learning, Bayesian inference, online optimization, gradient descent
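To illustrate the kind of meta-strategy the abstract describes, here is a minimal toy sketch (not the paper's actual construction): each task is an online quadratic-loss problem whose optimum is drawn near a shared centre, OGA is run within each task, and across tasks the initialization is meta-learned (here simply as the running average of past task optima). The loss, the task distribution, and the averaging meta-update are all illustrative assumptions.

```python
import numpy as np

def oga(grad_fn, w0, eta, T):
    """Online gradient algorithm: play w_t, observe g_t = grad_fn(w_t), step down."""
    w = w0.copy()
    played = []
    for _ in range(T):
        g = grad_fn(w)
        played.append(w.copy())
        w = w - eta * g
    return played

# Toy meta-learning setup (hypothetical): task optima u cluster around a
# shared centre mu, so an initialization learned from past tasks should help.
rng = np.random.default_rng(0)
d, T, n_tasks, eta = 3, 100, 30, 0.1
mu = np.array([1.0, -2.0, 0.5])

w0 = np.zeros(d)                # meta-learned initialization, starts at zero
cold_regret = warm_regret = 0.0
seen = []
for _ in range(n_tasks):
    u = mu + 0.1 * rng.standard_normal(d)
    for start in (np.zeros(d), w0):   # learning in isolation vs. warm start
        # regret on loss f(w) = 0.5 ||w - u||^2, whose optimum has loss 0
        regret = sum(0.5 * np.sum((w - u) ** 2)
                     for w in oga(lambda w: w - u, start, eta, T))
        if start is w0:
            warm_regret += regret
        else:
            cold_regret += regret
    seen.append(u)
    w0 = np.mean(seen, axis=0)  # meta-update: average of past task optima

print(warm_regret < cold_regret)
```

Once a few tasks have been observed, the meta-learned initialization starts close to each new task's optimum, so the cumulative regret of the warm-started runs falls below that of learning each task in isolation, which is the qualitative behaviour the paper's regret analysis quantifies.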