Scalable and Transferable Reinforcement Learning for Multi-Agent Mixed Cooperative–Competitive Environments Based on Hierarchical Graph Attention
Authors: Yining Chen, Guanghua Song, Zhenhui Ye, Xiaohong Jiang
Affiliation: 1. School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027, China; 2. College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Abstract:
Most previous studies on multi-agent systems aim to coordinate agents toward a common goal, but their lack of scalability and transferability prevents them from being applied to large-scale multi-agent tasks. To address these limitations, we propose a deep reinforcement learning (DRL) based multi-agent coordination control method for mixed cooperative–competitive environments. To improve scalability and transferability when applied to large-scale multi-agent systems, we construct inter-agent communication and use hierarchical graph attention networks (HGAT) to process each agent's local observations together with the messages received from its neighbors. We also adopt gated recurrent units (GRUs) to address the partial observability issue by retaining historical information. Simulation results on a cooperative task and a competitive task not only show the superiority of our method, but also demonstrate its scalability and transferability across tasks of various scales.
Keywords: multi-agent; deep reinforcement learning; partial observability
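To illustrate the two building blocks the abstract names, the sketch below combines a single graph-attention aggregation step over neighbor messages with a GRU update that carries history across timesteps. This is a minimal, hypothetical NumPy illustration of the general HGAT-plus-GRU idea, not the paper's actual architecture; all dimensions and weights are placeholder assumptions.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): one agent attends over
# messages from its neighbors (GAT-style aggregation), then feeds the
# aggregated message into a GRU cell to retain historical information
# under partial observability.

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_attention(h_self, h_neighbors, W, a):
    """Attention-weighted aggregation of neighbor features.

    h_self:      (d,)   local observation embedding of one agent
    h_neighbors: (k, d) embeddings received from k neighbors
    W:           (d_out, d) shared linear map
    a:           (2 * d_out,) attention vector
    """
    z_self = W @ h_self                          # (d_out,)
    z_nb = h_neighbors @ W.T                     # (k, d_out)
    # LeakyReLU attention logits, one per neighbor
    logits = np.array([np.concatenate([z_self, z]) @ a for z in z_nb])
    logits = np.where(logits > 0, logits, 0.2 * logits)
    alpha = softmax(logits)                      # weights sum to 1
    return np.tanh(alpha @ z_nb), alpha          # aggregated message

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: mixes the aggregated message x into hidden state h."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sig(Wz @ x + Uz @ h)                     # update gate
    r = sig(Wr @ x + Ur @ h)                     # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))     # candidate state
    return (1 - z) * h + z * h_tilde

# Toy dimensions: d-dim observations, k neighbors, d_out hidden units.
d, d_out, k = 8, 4, 5
h_self = rng.standard_normal(d)
h_nb = rng.standard_normal((k, d))
W = rng.standard_normal((d_out, d))
a = rng.standard_normal(2 * d_out)

agg, alpha = graph_attention(h_self, h_nb, W, a)
h_hidden = np.zeros(d_out)                       # initial recurrent state
mats = [rng.standard_normal((d_out, d_out)) for _ in range(6)]
h_next = gru_cell(agg, h_hidden, *mats)          # state carried to next step
```

In a hierarchical variant, the same attention operation would be stacked: first over individual neighbors within a group, then over group-level summaries, before the GRU update.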