Peter Stone's Selected Publications



Implicit Negotiation in Repeated Games

Implicit Negotiation in Repeated Games.
Michael L. Littman and Peter Stone.
In Proceedings of The Eighth International Workshop on Agent Theories, Architectures, and Languages (ATAL-2001), pp. 393–404, August 2001.
ATAL-2001: http://mas.cs.umass.edu/atal/

Download

[PDF] 181.2kB  [postscript] 149.4kB

Abstract

In business-related interactions such as the on-going high-stakes FCC spectrum auctions, explicit communication among participants is regarded as collusion, and is therefore illegal. In this paper, we consider the possibility of autonomous agents engaging in implicit negotiation via their tacit interactions. In repeated general-sum games, our testbed for studying this type of interaction, an agent using a "best response" strategy maximizes its own payoff assuming its behavior has no effect on its opponent. This notion of best response requires some degree of learning to determine the fixed opponent behavior. Against an unchanging opponent, the best-response agent performs optimally, and can be thought of as a "follower," since it adapts to its opponent. However, pairing two best-response agents in a repeated game can result in suboptimal behavior. We demonstrate this suboptimality in several different games using variants of Q-learning as an example of a best-response strategy. We then examine two "leader" strategies that induce better performance from opponent followers via stubbornness and threats. These tactics are forms of implicit negotiation in that they aim to achieve a mutually beneficial outcome without using explicit communication outside of the game.
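
The following Python sketch (not from the paper) illustrates the dynamic the abstract describes, under the assumption of the repeated Prisoner's Dilemma as the general-sum game and with Tit-for-Tat standing in for a threat-based "leader" strategy (the paper's own leader strategies may differ): a tabular Q-learning "follower" treats its opponent as a fixed environment and learns a best response to it.

import random
from collections import defaultdict

# Row player's payoffs in the Prisoner's Dilemma (action 0 = cooperate, 1 = defect).
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def q_follower_vs_leader(leader, steps=20000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning 'follower' whose state is the leader's previous move.
    It treats the leader as a fixed environment, i.e. it plays a best response."""
    Q = defaultdict(float)   # (state, action) -> estimated value
    state = 0                # leader's last action (start by assuming cooperation)
    follower_last = 0        # what the leader remembers about the follower
    total = 0.0
    for _ in range(steps):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[(state, x)])
        opp = leader(follower_last)      # leader reacts to the follower's last move
        r = PAYOFF[(a, opp)]
        next_state = opp
        best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
        state, follower_last = next_state, a
        total += r
    return total / steps

# An unconditional defector versus a threat-based leader (Tit-for-Tat here,
# chosen only as a simple illustration of retaliation, not the paper's strategy).
always_defect = lambda prev: 1
tit_for_tat = lambda prev: prev       # cooperate unless the follower just defected

if __name__ == "__main__":
    random.seed(0)
    print("average payoff vs. always-defect:", q_follower_vs_leader(always_defect))
    print("average payoff vs. tit-for-tat:  ", q_follower_vs_leader(tit_for_tat))

Against the unconditional defector the follower settles on mutual defection, while against the retaliating leader its learned best response is to cooperate, since any defection is punished on the next step. This is the sense in which a leader can induce better outcomes from a best-response follower without explicit communication.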

BibTeX Entry

@InProceedings{threats-ATAL2001,
        author = "Michael L. Littman and Peter Stone",
        title = "Implicit Negotiation in Repeated Games",
        booktitle = "Proceedings of The Eighth International Workshop on Agent Theories, Architectures, and Languages (ATAL-2001)",
        month = "August",
        pages="393--404",
        year = "2001",
        abstract={
                  In business-related interactions such as the
                  on-going high-stakes FCC spectrum auctions, explicit
                  communication among participants is regarded as
                  collusion, and is therefore illegal.  In this paper,
                  we consider the possibility of autonomous agents
                  engaging in implicit negotiation via their tacit
                  interactions.  In repeated general-sum games, our
                  testbed for studying this type of interaction, an
                  agent using a ``best response'' strategy maximizes
                  its own payoff assuming its behavior has no effect
                  on its opponent.  This notion of best response
                  requires some degree of learning to determine the
                  fixed opponent behavior.  Against an unchanging
                  opponent, the best-response agent performs
                  optimally, and can be thought of as a ``follower,''
                  since it adapts to its opponent.  However, pairing
                  two best-response agents in a repeated game can
                  result in suboptimal behavior.  We demonstrate this
                  suboptimality in several different games using
                  variants of Q-learning as an example of a
                  best-response strategy.  We then examine two
                  ``leader'' strategies that induce better performance
                  from opponent followers via stubbornness and
                  threats.  These tactics are forms of implicit
                  negotiation in that they aim to achieve a mutually
                  beneficial outcome without using explicit
                  communication outside of the game.
        },
        wwwnote={<a href="http://mas.cs.umass.edu/atal/">ATAL-2001</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 17, 2024 18:42:59