KnowRU: Knowledge Reuse via Knowledge Distillation in Multi-Agent Reinforcement Learning
Authors: Zijian Gao, Kele Xu, Bo Ding, Huaimin Wang
Institution: College of Computer, National University of Defense Technology, Changsha 410000, China
Abstract: Recently, deep reinforcement learning (RL) algorithms have achieved significant progress in the multi-agent domain. However, training for increasingly complex tasks is time-consuming and resource-intensive. To alleviate this problem, efficiently leveraging historical experience is essential; this remains under-explored in previous studies because most existing methods, owing to their complicated designs, fail to achieve this goal in a continuously dynamic system. In this paper, we propose a knowledge-reuse method called “KnowRU”, which can be easily deployed in the majority of multi-agent reinforcement learning (MARL) algorithms without requiring complicated hand-coded design. We employ the knowledge distillation paradigm to transfer knowledge among agents, shortening the training phase for new tasks while improving the agents' asymptotic performance. To empirically demonstrate the robustness and effectiveness of KnowRU, we perform extensive experiments with state-of-the-art MARL algorithms in both collaborative and competitive scenarios. The results show that KnowRU outperforms recently reported methods: it not only accelerates the training phase but also improves training performance, underscoring the importance of knowledge reuse for MARL.
Keywords: multi-agent reinforcement learning; knowledge reuse; knowledge distillation
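
The abstract names only the paradigm (distilling a trained agent's knowledge into an agent learning a new task), not a concrete loss. The sketch below is a minimal, hypothetical illustration in PyTorch of how such a setup is commonly realized: a temperature-softened KL-divergence term between a teacher agent's action logits and a student's, added to the student's ordinary MARL policy loss. The function names, temperature, and weighting coefficient beta are assumptions made for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch, not the authors' released code: a knowledge-
# distillation auxiliary loss of the kind the abstract describes, where a
# "teacher" agent trained on a previous task guides a "student" agent on a
# new task. The temperature and beta are assumed hyperparameters.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened action distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # 'batchmean' matches the mathematical definition of KL divergence;
    # the T^2 factor is the standard gradient rescaling from Hinton et al.
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2


def total_loss(rl_loss: torch.Tensor,
               student_logits: torch.Tensor,
               teacher_logits: torch.Tensor,
               beta: float = 0.5) -> torch.Tensor:
    """Add the distillation term to the student's usual MARL policy loss."""
    return rl_loss + beta * distillation_loss(student_logits, teacher_logits)
```

In a scheme like this, beta would typically be tuned or annealed so the teacher's guidance accelerates early training on the new task without capping the student's asymptotic performance.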