Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition (2021)
Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, and Animashree Anandkumar
In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team's overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our method on a resource collection task, a rescue game, and StarCraft micromanagement tasks, and demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.
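To make the framework concrete, here is a minimal PyTorch sketch of the coach-player structure the abstract describes: a global coach attends over all agents' full-view features and distributes per-agent strategy vectors, each partially observing player conditions its action values on its own observation plus the last strategy it received, and an adaptive gate rebroadcasts a strategy only when it has changed enough. This is an illustrative sketch, not the authors' released code; all class names, dimensions, and the broadcast threshold are hypothetical, and the variational regularizer is omitted for brevity.

```python
import torch
import torch.nn as nn

class Coach(nn.Module):
    """Global coach: attends over all agents' full-view features and
    emits one strategy vector per agent (hypothetical sketch)."""
    def __init__(self, obs_dim, strategy_dim, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, strategy_dim)
        self.attn = nn.MultiheadAttention(strategy_dim, n_heads, batch_first=True)
        self.head = nn.Linear(strategy_dim, strategy_dim)

    def forward(self, global_obs):            # (batch, n_agents, obs_dim)
        h = self.embed(global_obs)
        z, _ = self.attn(h, h, h)             # agents attend to one another
        return self.head(z)                   # (batch, n_agents, strategy_dim)

class Player(nn.Module):
    """Decentralized player: conditions its action values on its partial
    observation and the most recently received strategy vector."""
    def __init__(self, obs_dim, strategy_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + strategy_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, partial_obs, strategy):
        return self.net(torch.cat([partial_obs, strategy], dim=-1))

def maybe_broadcast(new_z, old_z, threshold=0.5):
    """Adaptive communication: send an updated strategy only when it
    differs enough from the one the player already holds (the threshold
    is a hypothetical hyperparameter)."""
    send = (new_z - old_z).norm(dim=-1, keepdim=True) > threshold
    return torch.where(send, new_z, old_z)

# Example: 3 agents, full-view features of size 32, partial obs of size 16
coach = Coach(obs_dim=32, strategy_dim=64)
player = Player(obs_dim=16, strategy_dim=64, n_actions=5)
z = coach(torch.randn(1, 3, 32))              # strategies for all agents
q = player(torch.randn(1, 3, 16), z)          # per-agent action values
```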
View:
PDF
Citation:
In Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139, Vienna, Austria, July 2021.
Bibtex:
@InProceedings{ICML21-COPA,
  author    = {Bo Liu and Qiang Liu and Peter Stone and Animesh Garg and Yuke Zhu and Animashree Anandkumar},
  title     = {Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning (ICML)},
  series    = {PMLR 139},
  address   = {Vienna, Austria},
  month     = {July},
  year      = {2021},
}
Presentation:
Poster