Peter Stone's Selected Publications



Balancing Individual Preferences and Shared Objectives in Multiagent Reinforcement Learning

Balancing Individual Preferences and Shared Objectives in Multiagent Reinforcement Learning.
Ishan Durugkar, Elad Liebman, and Peter Stone.
In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020), July 2020.
15-minute presentation: https://drive.google.com/file/d/1MkJBfx-qoESPPF6ZC1ZhH-1uXzp63lgK/view?usp=sharing

Download

[PDF] (3.9MB)

Abstract

In multiagent reinforcement learning scenarios, it is often the case that independent agents must jointly learn to perform a cooperative task. This paper focuses on such a scenario in which agents have individual preferences regarding how to accomplish the shared task. We consider a framework for this setting which balances individual preferences against task rewards using a linear mixing scheme. In our theoretical analysis we establish that agents can reach an equilibrium that leads to optimal shared task reward even when they consider individual preferences which are not fully aligned with this task. We then empirically show, somewhat counter-intuitively, that there exist mixing schemes that outperform a purely task-oriented baseline. We further consider empirically how to optimize the mixing scheme.
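To make the linear mixing scheme concrete, here is a minimal sketch of how such a combined reward could be computed. This is an illustrative assumption rather than the paper's actual implementation: the function name mixed_reward and the mixing weight alpha are hypothetical, standing in for whatever parameterization the paper uses.

def mixed_reward(task_reward, preference_reward, alpha):
    # Linearly mix the shared task reward with an agent's individual
    # preference reward. alpha is an assumed mixing weight in [0, 1]:
    # alpha = 0 recovers the purely task-oriented baseline, while
    # alpha = 1 ignores the shared task entirely.
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * task_reward + alpha * preference_reward

# Example: an agent receiving a task reward of 1.0 and a preference
# reward of 0.4, with mixing weight 0.25, trains on 0.85.
print(mixed_reward(1.0, 0.4, 0.25))  # -> 0.85

Under this reading, the empirical finding that some mixing schemes outperform the purely task-oriented baseline corresponds to intermediate values of alpha beating alpha = 0.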

BibTeX Entry

@InProceedings{IJCAI20-ishand,
  author    = {Ishan Durugkar and Elad Liebman and Peter Stone},
  title     = {Balancing Individual Preferences and Shared Objectives in
               Multiagent Reinforcement Learning},
  booktitle = {Proceedings of the 29th International Joint Conference on
               Artificial Intelligence (IJCAI 2020)},
  location  = {Yokohama, Japan},
  month     = {July},
  year      = {2020},
  abstract  = {In multiagent reinforcement learning scenarios, it is often
               the case that independent agents must jointly learn to
               perform a cooperative task. This paper focuses on such a
               scenario in which agents have individual preferences
               regarding how to accomplish the shared task. We consider a
               framework for this setting which balances individual
               preferences against task rewards using a linear mixing
               scheme. In our theoretical analysis we establish that agents
               can reach an equilibrium that leads to optimal shared task
               reward even when they consider individual preferences which
               are not fully aligned with this task. We then empirically
               show, somewhat counter-intuitively, that there exist mixing
               schemes that outperform a purely task-oriented baseline. We
               further consider empirically how to optimize the mixing
               scheme.},
  wwwnote   = {<a href="https://drive.google.com/file/d/1MkJBfx-qoESPPF6ZC1ZhH-1uXzp63lgK/view?usp=sharing">15-minute presentation</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:42:52