Improved Information-Theoretic Generalization Bounds for Distributed, Federated, and Iterative Learning
Authors:Leighton Pate Barnes  Alex Dytso  Harold Vincent Poor
Affiliation:1.Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA;2.Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
Abstract:We consider information-theoretic bounds on the expected generalization error for statistical learning problems in a network setting. In this setting, there are K nodes, each with its own independent dataset, and the models from the K nodes have to be aggregated into a final centralized model. We consider both simple averaging of the models as well as more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that demonstrate an improved dependence of 1/K on the number of nodes. These “per node” bounds are in terms of the mutual information between the training dataset and the trained weights at each node and are therefore useful in describing the generalization properties inherent to having communication or privacy constraints at each node.
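The abstract's simplest aggregation scheme, averaging the K locally trained models into one centralized model, can be sketched as follows. This is a minimal illustration of model averaging, not the authors' implementation; the function name and the dimension of the weight vectors are hypothetical.

```python
import numpy as np

def average_models(node_weights):
    """Aggregate K locally trained weight vectors by simple averaging.

    node_weights: a list of K NumPy arrays of identical shape, one per
    node, each trained on that node's independent dataset (hypothetical
    setup mirroring the model-averaging scheme in the abstract).
    """
    return np.mean(np.stack(node_weights), axis=0)

# Example: K = 4 nodes, each holding a 3-dimensional weight vector.
K = 4
rng = np.random.default_rng(0)
weights = [rng.normal(size=3) for _ in range(K)]
w_bar = average_models(weights)
print(w_bar.shape)  # (3,)
```

The "per node" bounds described above apply to each node's weights individually, so an averaging step like this lets the 1/K improvement appear in the bound on the aggregated model.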
Keywords:generalization error   information-theoretic bounds   distributed and federated learning