Peter Stone's Selected Publications



DM$^2$: Decentralized Multi-Agent Reinforcement Learning via Distribution Matching

DM$^2$: Decentralized Multi-Agent Reinforcement Learning via Distribution Matching.
Caroline Wang, Ishan Durugkar, Elad Liebman, and Peter Stone.
In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI-23), February 2023.

Download

[PDF] (801.2kB)  [slides.pdf] (3.7MB)  [poster.pdf] (1.4MB)

Abstract

Current approaches to multi-agent cooperation rely heavily on centralized mechanisms or explicit communication protocols to ensure convergence. This paper studies the problem of distributed multi-agent learning without resorting to centralized components or explicit communication. It examines the use of distribution matching to facilitate the coordination of independent agents. In the proposed scheme, each agent independently minimizes the distribution mismatch to the corresponding component of a target visitation distribution. The theoretical analysis shows that, under certain conditions, each agent minimizing its individual distribution mismatch allows convergence to the joint policy that generated the target distribution. Further, if the target distribution comes from a joint policy that optimizes a cooperative task, the optimal policy for a combination of this task reward and the distribution matching reward is that same joint policy. This insight is used to formulate a practical algorithm (DM$^2$), in which each individual agent matches a target distribution derived from concurrently sampled trajectories of a joint expert policy. Experimental validation in the StarCraft domain shows that combining (1) a task reward and (2) a distribution matching reward based on expert demonstrations for the same task allows agents to outperform a naive distributed baseline. Additional experiments probe the conditions under which expert demonstrations need to be sampled to obtain the learning benefits.
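
Illustrative Sketch

The abstract's core recipe (each agent independently augments its task reward with a reward for matching its component of a target visitation distribution) can be made concrete with a short, self-contained sketch. This is not the authors' implementation: the discriminator architecture, the -log(1 - D) reward form, the reward_weight coefficient, and all names below are illustrative assumptions, shown only to clarify how the per-agent reward combination might look.

# Illustrative sketch of DM^2-style reward shaping for ONE agent.
# Assumes a GAIL-style per-agent discriminator; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Discriminator(nn.Module):
    """Classifies one agent's (observation, action) pairs as expert vs. learner."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # act is assumed to be a one-hot encoded discrete action; returns logits.
        return self.net(torch.cat([obs, act], dim=-1))


def matching_reward(disc: Discriminator, obs: torch.Tensor,
                    act: torch.Tensor) -> torch.Tensor:
    """Distribution matching reward: larger when the discriminator judges the
    learner's (obs, act) pair to resemble the expert's visitation data.
    Uses the common -log(1 - D) adversarial-imitation form (an assumption)."""
    with torch.no_grad():
        logits = disc(obs, act)
        return -F.logsigmoid(-logits)  # equals -log(1 - sigmoid(logits))


def shaped_reward(task_reward: torch.Tensor, disc: Discriminator,
                  obs: torch.Tensor, act: torch.Tensor,
                  reward_weight: float = 0.5) -> torch.Tensor:
    """Each agent independently adds the matching reward to its task reward.
    task_reward is expected to have shape (batch, 1); reward_weight is a
    hypothetical mixing coefficient."""
    return task_reward + reward_weight * matching_reward(disc, obs, act)


def discriminator_step(disc: Discriminator, opt: torch.optim.Optimizer,
                       expert_batch, learner_batch) -> float:
    """One binary-classification update: expert pairs labeled 1, learner pairs 0."""
    bce = nn.BCEWithLogitsLoss()
    e_obs, e_act = expert_batch
    l_obs, l_act = learner_batch
    loss = (bce(disc(e_obs, e_act), torch.ones(e_obs.shape[0], 1)) +
            bce(disc(l_obs, l_act), torch.zeros(l_obs.shape[0], 1)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

In DM$^2$ itself, each agent's target distribution is derived from concurrently sampled trajectories of a joint expert policy, so in this sketch agent i would train its own discriminator only on agent i's component of those demonstrations; no centralized component or explicit communication is assumed.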

BibTeX Entry

@InProceedings{aaai23-wang,
  author = {Caroline Wang and Ishan Durugkar and Elad Liebman and Peter Stone},
  title = {DM$^2$: Decentralized Multi-Agent Reinforcement Learning via Distribution Matching},
  booktitle = {Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI-23)},
  location = {Washington, D.C.},
  month = {February},
  year = {2023},
  abstract = {Current approaches to multi-agent cooperation rely heavily on 
  centralized mechanisms or explicit communication protocols to ensure 
  convergence. This paper studies the problem of distributed multi-agent 
  learning without resorting to centralized components or explicit 
  communication. It examines the use of distribution matching to facilitate the
   coordination of independent agents. In the proposed scheme, each agent 
  independently minimizes the distribution mismatch to the corresponding 
  component of a target visitation distribution. The theoretical analysis shows
   that under certain conditions, each agent minimizing its individual 
  distribution mismatch allows the convergence to the joint policy that 
  generated the target distribution. Further, if the target distribution is 
  from a joint policy that optimizes a cooperative task, the optimal policy for
   a combination of this task reward and the distribution matching reward is 
  the same joint policy. This insight is used to formulate a practical 
  algorithm (DM$^2$), in which each individual agent matches a target 
  distribution derived from concurrently sampled trajectories from a joint 
  expert policy. Experimental validation on the StarCraft domain shows that 
  combining (1) a task reward, and (2) a distribution matching reward for 
  expert demonstrations for the same task, allows agents to outperform a naive 
  distributed baseline. Additional experiments probe the conditions under which
   expert demonstrations need to be sampled to obtain the learning benefits.},
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:42:51