Implicit Negotiation in Repeated Games (2001)
Michael L. Littman and Peter Stone
In business-related interactions such as the ongoing high-stakes FCC spectrum auctions, explicit communication among participants is regarded as collusion and is therefore illegal. In this paper, we consider the possibility of autonomous agents engaging in implicit negotiation via their tacit interactions. In repeated general-sum games, our testbed for studying this type of interaction, an agent using a ``best response'' strategy maximizes its own payoff assuming its behavior has no effect on its opponent. This notion of best response requires some degree of learning to determine the fixed opponent behavior. Against an unchanging opponent, the best-response agent performs optimally and can be thought of as a ``follower,'' since it adapts to its opponent. However, pairing two best-response agents in a repeated game can result in suboptimal behavior. We demonstrate this suboptimality in several different games using variants of Q-learning as an example of a best-response strategy. We then examine two ``leader'' strategies that induce better performance from opponent followers via stubbornness and threats. These tactics are forms of implicit negotiation in that they aim to achieve a mutually beneficial outcome without using explicit communication outside of the game.
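To make the ``best response'' idea concrete, the sketch below (not taken from the paper) shows a tabular Q-learning follower in the iterated Prisoner's Dilemma whose state is the opponent's previous move. A Tit-for-Tat opponent stands in for a threat-based fixed strategy; against it the follower typically learns to cooperate, while against an unconditional defector it learns to defect. The payoff matrix, hyperparameters, and opponent policies are illustrative assumptions, not the paper's exact experimental setup or its specific leader strategies.

import random

COOPERATE, DEFECT = 0, 1
# Row player's payoff for (my_action, opponent_action) in the Prisoner's Dilemma.
PAYOFF = {(COOPERATE, COOPERATE): 3, (COOPERATE, DEFECT): 0,
          (DEFECT, COOPERATE): 5, (DEFECT, DEFECT): 1}

def tit_for_tat(learner_last_move):
    # Threat-style fixed opponent: repeat whatever the learner did last round.
    return learner_last_move

def always_defect(learner_last_move):
    # Unconditional defector, for comparison.
    return DEFECT

def q_learning_follower(opponent, rounds=50000, alpha=0.1, gamma=0.9, epsilon=0.1):
    # Tabular Q-learning "follower"; the state is the opponent's previous move.
    Q = {(s, a): 0.0 for s in (COOPERATE, DEFECT) for a in (COOPERATE, DEFECT)}
    my_last, opp_last = COOPERATE, COOPERATE   # arbitrary initial history
    for _ in range(rounds):
        state = opp_last
        if random.random() < epsilon:          # epsilon-greedy exploration
            action = random.choice((COOPERATE, DEFECT))
        else:
            action = max((COOPERATE, DEFECT), key=lambda a: Q[(state, a)])
        opp_action = opponent(my_last)         # fixed opponent reacts to the history
        reward = PAYOFF[(action, opp_action)]
        next_best = max(Q[(opp_action, a)] for a in (COOPERATE, DEFECT))
        Q[(state, action)] += alpha * (reward + gamma * next_best - Q[(state, action)])
        my_last, opp_last = action, opp_action
    # Greedy policy per state, i.e. the learned best response to this opponent.
    return {s: max((COOPERATE, DEFECT), key=lambda a: Q[(s, a)])
            for s in (COOPERATE, DEFECT)}

if __name__ == "__main__":
    print("vs Tit-for-Tat:  ", q_learning_follower(tit_for_tat))    # typically cooperates in both states
    print("vs Always-Defect:", q_learning_follower(always_defect))  # defects in both states

With a discount factor of 0.9, always cooperating against Tit-for-Tat is worth more than exploiting it once and then facing mutual defection, which is why the follower is induced into the mutually beneficial outcome; the same follower defects against an opponent whose behavior it cannot influence.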
Citation:
In Proceedings of The Eighth International Workshop on Agent Theories, Architectures, and Languages (ATAL-2001), pp. 393-404, August 2001.