
Introduction

In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI) [20]. Focusing on how AI agents' behaviors can and do interact, MAS applies to a variety of frameworks ranging from information agents to real robots. Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help manage that complexity [1, 26].

Robotic soccer is a particularly good domain for studying MAS. It has been gaining popularity in recent years, with international competitions, namely RoboCup and MIROSOT, planned for the near future [22]. Robotic soccer can be used as a standard testbed to evaluate different MAS techniques in a straightforward manner: teams implemented with different techniques can play against each other.

The main goal of any testbed is to facilitate the trial and evaluation of ideas that have promise in the real world. A wide variety of MAS issues can be studied in robotic soccer [20]. In this article, we focus on the multiagent learning opportunities that arise in Noda's Soccer Server [12]; several properties of simulated robotic soccer make it a good testbed for MAS.

Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. We call this approach layered learning. Because of the complexity of the domain, it is futile to try to learn intelligent behaviors directly from the primitives provided by the server. Instead, we identified useful low-level skills that must be learned before moving on to higher-level strategies. Using our own experience and insights to help the clients learn, we acted as human coaches do when they teach young children how to play real soccer.
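To make the layering concrete, the following Python sketch illustrates the structure of the idea. It is purely illustrative: the class names, the simple threshold "learners," and the interception and passing features are hypothetical stand-ins, not the clients' actual implementation or the Soccer Server's interface. The point is only the organization: the lower layer is trained first, and the higher layer is then trained and queried on top of it.

    # A hypothetical sketch of layered learning, not the article's actual code:
    # a low-level individual skill is learned first, and a higher-level,
    # multi-player skill is then learned using the lower layer's output.

    import random

    class InterceptionSkill:
        """Layer 1: an individual ball-control skill, learned from trial data."""

        def __init__(self):
            self.reachable_distance = 0.0  # learned parameter

        def train(self, trials):
            # trials: (distance_to_ball, succeeded) pairs from repeated attempts.
            successes = [d for d, ok in trials if ok]
            if successes:
                self.reachable_distance = max(successes)

        def likely_to_control(self, distance):
            return distance <= self.reachable_distance

    class PassDecision:
        """Layer 2: a multi-player skill built on the learned lower layer."""

        def __init__(self, interception):
            self.interception = interception  # reuse the previously learned layer
            self.safe_opponent_distance = 0.0  # learned parameter

        def train(self, trials):
            # trials: (teammate_dist, nearest_opponent_dist, succeeded) triples.
            # Only successful passes the lower layer deems receivable are used.
            relevant = [o for t, o, ok in trials
                        if ok and self.interception.likely_to_control(t)]
            if relevant:
                self.safe_opponent_distance = min(relevant)

        def should_pass(self, teammate_dist, nearest_opponent_dist):
            return (self.interception.likely_to_control(teammate_dist)
                    and nearest_opponent_dist >= self.safe_opponent_distance)

    # Train bottom-up, then query the top layer during play.
    low = InterceptionSkill()
    low.train([(random.uniform(0, 30), random.random() < 0.5) for _ in range(200)])
    high = PassDecision(low)
    high.train([(random.uniform(0, 30), random.uniform(0, 30), random.random() < 0.5)
                for _ in range(200)])
    print(high.should_pass(teammate_dist=12.0, nearest_opponent_dist=8.0))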

In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level, more "social" skill: one that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. Finally, we verify empirically that the learned skills are applicable to a game-like situation.

Although several more layers are needed, the two learned behavior levels described below will allow us to continue moving upward towards high-level strategy issues. Keeping in mind the many open research issues in Multiagent Learning, we plan to use ML techniques at all stages to help the clients develop their behaviors.

In Section 2, we describe our overall robotic soccer system, of which the strategic level is one part. Section 3 presents some previous work that relates to this article. The RoboCup Soccer Server is described in Section 4. Then, in the body of the paper, Sections 5-7, we present two learned layers of behavior, verify that they are useful in game situations, and discuss future work. Section 8 summarizes our layered learning approach as presented in this article and concludes.





