
Multiagent Learning

 

Multiagent learning is at the intersection of Multiagent Systems and Machine Learning, two subfields of Artificial Intelligence (see Figure 1). As described by Weiß, it is ``learning that is done by several agents and that becomes possible only because several agents are present'' [Weiß, 1995]. In fact, in certain circumstances, the first clause of this definition is not necessary. We claim that it is possible to engage in multiagent learning even if only one agent is actually learning. In particular, if an agent is learning to acquire skills for interacting with other agents in its environment, then its learning is multiagent learning regardless of whether or not the other agents are learning simultaneously. Especially if the learned behavior enables additional multiagent behaviors, perhaps ones in which more than one agent does learn, the learned behavior is a multiagent behavior. Notice that this situation certainly satisfies the second clause of Weiß's definition: the learning would not be possible were the agent isolated.

Figure 1: Multiagent Learning is at the intersection of Multiagent Systems and Machine Learning, two subfields of Artificial Intelligence.

Traditional Machine Learning typically involves a single agent that is trying to maximize some utility function without any knowledge of, or concern for, whether there are other agents in the environment. Examples of traditional Machine Learning tasks include function approximation, classification, and the improvement of problem-solving performance given empirical data. Meanwhile, the subfield of Multiagent Systems, as surveyed in [Stone and Veloso, 1996c], deals with domains having multiple agents and considers mechanisms for the interaction of independent agents' behaviors. Thus, multiagent learning includes any situation in which an agent learns to interact with other agents, even if the other agents' behaviors are static.
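
To make this distinction concrete, the following sketch illustrates the simplest such situation: a single agent running tabular Q-learning in an environment that also contains a second agent whose policy is fixed. The sketch is our own illustration rather than code from any of the systems discussed here; the environment interface (reset, step, actions) and fixed_opponent_policy are hypothetical stand-ins. Although only one agent ever updates its policy, what it learns is meaningful only because the other agent is present.

    import random
    from collections import defaultdict

    def q_learning_vs_static_agent(env, fixed_opponent_policy, episodes=1000,
                                   alpha=0.1, gamma=0.9, epsilon=0.1):
        """Tabular Q-learning for a single learner that shares its world with
        a non-learning agent.  Only the learner's Q-table is updated."""
        q = defaultdict(float)              # Q[(state, action)] -> value estimate
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                # epsilon-greedy action selection for the learning agent
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: q[(state, a)])
                # the other agent acts according to its fixed, static policy
                opponent_action = fixed_opponent_policy(state)
                next_state, reward, done = env.step(action, opponent_action)
                # standard one-step Q-learning update
                best_next = 0.0 if done else max(q[(next_state, a)]
                                                 for a in env.actions)
                q[(state, action)] += alpha * (reward + gamma * best_next
                                               - q[(state, action)])
                state = next_state
        return q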

The main justification for considering situations in which only a single agent learns to be multiagent learning is that the learned behavior can often be used as a basis for more complex interactive behaviors. For example, this article reports on the development of a low-level learned behavior in a multiagent domain. Although only a single agent does the learning, the behavior is only possible in the presence of other agents, and, more importantly, it enables the agent to participate in higher-level collaborative and adversarial learning situations. When multiagent learning is accomplished by layering learned behaviors one on top of the other, as in this case, all levels of learning that involve interaction with other agents contribute to, and are a part of, multiagent learning.

The multiagent learning literature already includes some examples of a single agent learning in a multiagent environment. One of the earliest multiagent learning papers describes a reinforcement learning agent that incorporates information gathered by another agent [Tan, 1993]. It is considered multiagent learning because the learning agent has a cooperating agent and an adversary agent with which it learns to interact.
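
The general flavor of such information sharing can be sketched as follows. This is a rough illustration in the spirit of that work, not a reproduction of Tan's system; the scout.report() interface and the joint-state encoding are invented for the example. The learner simply folds the cooperating agent's communicated observation into its own state before making the usual Q-learning update (q is a defaultdict(float) as in the previous sketch).

    def joint_state(own_observation, scout_message):
        """Combine the learner's own (partial) view with information
        communicated by a cooperating scout agent."""
        return (own_observation, scout_message)

    def step_and_learn(q, env, scout, state, actions, alpha=0.1, gamma=0.9):
        """One greedy Q-learning step over the augmented state."""
        action = max(actions, key=lambda a: q[(state, a)])
        own_obs, reward, done = env.step(action)
        next_state = joint_state(own_obs, scout.report())  # fold in shared information
        best_next = 0.0 if done else max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        return next_state, done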

Another such example is a negotiation scenario in which one agent learns the negotiating techniques of another using Bayesian Learning methods [Zeng and Sycara, 1996]. Again, this situation is considered multiagent learning because the learning agent is learning to interact with another agent: the situation only makes sense due to the presence of multiple agents. This example represents a class of multiagent learning in which a learning agent attempts to model other agents.
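
The core of such agent modeling is a Bayesian update over hypotheses about the other agent's strategy, which can be sketched generically as follows. The hypothesis names and likelihood values below are invented for illustration and are not Zeng and Sycara's actual model.

    def update_beliefs(prior, likelihood, observation):
        """One application of Bayes' rule: prior maps each hypothesis h to P(h);
        likelihood(observation, h) returns P(observation | h)."""
        unnormalized = {h: p * likelihood(observation, h) for h, p in prior.items()}
        total = sum(unnormalized.values())
        return {h: v / total for h, v in unnormalized.items()}

    def offer_likelihood(offer, hypothesis):
        # invented likelihoods: an opponent with a "low" reservation price
        # tends to make low offers
        if offer < 50:
            return {"low": 0.7, "medium": 0.2, "high": 0.1}[hypothesis]
        return {"low": 0.1, "medium": 0.3, "high": 0.6}[hypothesis]

    prior = {"low": 1/3, "medium": 1/3, "high": 1/3}
    posterior = update_beliefs(prior, offer_likelihood, observation=40)
    # after observing a low offer, the posterior favors the "low" hypothesis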

One final example of multiagent learning in which only one of the agents learns is a training scenario in which a novice agent learns from a knowledgeable agent [Clouse, 1996]. The novice learns to drive on a simulated race track from an expert agent whose behavior is fixed.
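
One heavily simplified way to picture this kind of training interaction is sketched below. It is not Clouse's algorithm, merely an assumed setup in which the novice sometimes takes the fixed expert's recommended action instead of its own greedy choice, while only the novice's Q-table is ever updated.

    import random

    def novice_training_step(q, env, expert_policy, state, actions,
                             take_advice_prob=0.5, alpha=0.1, gamma=0.9):
        """One learning step for a novice that occasionally follows a fixed
        expert's recommendation; the expert itself never learns."""
        if random.random() < take_advice_prob:
            action = expert_policy(state)        # follow the expert's advice
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        best_next = 0.0 if done else max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        return next_state, done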

What all of the above learning systems have in common is that the learning agent interacts with other agents. The learning is therefore possible only because of the presence of these other agents, and it may enable higher-level interactions with them. These characteristics define the type of multiagent learning described in this article.


