As robots become more adept at operating in the real world, the high-level issues of collaborative and adversarial planning and learning in real-time situations are becoming more important. An interesting emerging domain that is particularly appropriate for studying these issues is Robotic Soccer. Although realistic simulation environments exist [Noda1995, Sahota1993] and are useful, it is important to have some actual physical agents in order to address the full complexity of the task.
In this paper we discuss the design decisions we faced, along with the impasses we encountered, during our quest to build small robots, which we call mini-robots, capable of playing Robotic Soccer. We are not the first to build robots for this domain [Asada et al. 1994a, Sahota et al. 1995]; however, our system is distinctly different from the other systems of which we are aware. Furthermore, with at least two Robotic Soccer competitions planned during the next two years [Stone et al. 1996, Kitano et al. 1997], there will surely be several more systems built in the near future. The purpose of this paper is to describe our current mini-robot system in as much detail as possible, so that others may benefit from both our setbacks and our successes. We aim for this paper to render our efforts as replicable as possible.
Although we will describe certain dead-ends we traversed, most of this paper is devoted to describing the current solution, rather than the path by which we came to it. We try to lay out the choices we faced at every step, but the poorer choices can mostly be inferred as those that did not lead to the current system. After a good deal of effort, we now have a working system with which up to sixteen cars can be independently controlled. Although our system is still evolving, the current version of our mini-robot system is fully implemented and functional. We believe it is important to present what we have learned to the Intelligent Robots community at this time.
Our current mini-robotic system is certainly usable for tasks other than Robotic Soccer, but since our main purpose in building the system was to work in the Robotic Soccer domain, we made most of our design decisions with this domain primarily in mind.
Robotic Soccer is an exciting domain for Intelligent Robotics for many reasons. The fast-paced nature of the domain necessitates real-time sensing coupled with quick decision making and action. Furthermore, the behaviors and decision-making processes can range from the simplest reactive behaviors, such as moving directly towards the ball, to arbitrarily complex reasoning procedures that take into account the actions and perceived strategies of teammates and opponents. Opportunities, and indeed demands, for innovative and novel techniques abound.
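To make the notion of a simple reactive behavior concrete, the following sketch shows one possible "move directly towards the ball" behavior. It is an illustrative example only, not part of our system: the function name, the coordinate convention (planar positions plus a heading angle), and the (turn, forward) command representation are all assumptions made for this sketch.

```python
import math

def go_to_ball(robot_x, robot_y, robot_theta, ball_x, ball_y):
    """Purely reactive behavior: steer directly toward the ball.

    Hypothetical interface: returns a (turn, forward) command, where
    turn is the signed heading error to correct (radians) and forward
    is a speed in [0, 1] that slows the robot as it nears the ball.
    """
    dx, dy = ball_x - robot_x, ball_y - robot_y
    distance = math.hypot(dx, dy)
    desired_heading = math.atan2(dy, dx)
    # Smallest signed angle between the current and desired headings.
    turn = (desired_heading - robot_theta + math.pi) % (2 * math.pi) - math.pi
    forward = min(1.0, distance)  # cap the speed; slow down near the ball
    return turn, forward
```

The behavior uses no memory and no model of the opponents: each call maps the current percept directly to a command, which is what makes it reactive.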
A ground-breaking system for Robotic Soccer, and the one that served as the inspiration and basis for our work, is the Dynamo System developed at the University of British Columbia [Sahota et al. 1995]. This system was designed to be capable of supporting several robots per team, but most work has been done in a 1 vs. 1 scenario. Sahota used this system to introduce a decision making strategy called reactive deliberation which was used to choose from among seven hard-wired behaviors [Sahota1994]. Subsequently, Ford used Reinforcement Learning (RL) techniques to choose from among the same hard-wired behaviors [Ford et al. 1994]. Our system differs from the Dynamo system in several ways, most notably our use of infrared (IR) rather than radio waves for communication between the controlling computer and the cars. We also hope to do minimal hard-wiring, instead learning behaviors from the bottom up.
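To illustrate the kind of learning described above (choosing among a fixed set of hard-wired behaviors), the following is a minimal tabular Q-learning sketch. It is not a description of Ford's actual method or of our system; the class name, the discretized-state assumption, and all parameter values are assumptions made for this example.

```python
import random
from collections import defaultdict

class BehaviorSelector:
    """Tabular Q-learning over a fixed set of hand-coded behaviors.

    Hypothetical sketch: states are discretized game situations and
    actions are indices into the behavior set (e.g. seven behaviors).
    """

    def __init__(self, n_behaviors, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_behaviors)
        self.n = n_behaviors
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        # Epsilon-greedy selection: usually exploit, occasionally explore.
        if random.random() < self.epsilon:
            return random.randrange(self.n)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

The point of the sketch is the division of labor: the behaviors themselves are hard-wired, and learning is confined to the discrete choice of which behavior to invoke in each situation.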
The Robotic Soccer system being developed in Asada's lab is very different from both the Dynamo system and our own [Asada et al. 1994a, Asada et al. 1994b]. Asada's robots are larger and are equipped with on-board sensing capabilities. They have been used to develop some low-level behaviors such as shooting and avoiding, as well as an RL technique for combining behaviors [Asada et al. 1994a, Asada et al. 1994b]. While the goals of this research are very similar to our own, the approach is different. Asada has developed a sophisticated robot system with many advanced capabilities, while we have chosen to focus on producing a simple, robust design that will enable us to concentrate our efforts on learning low-level behaviors and high-level strategies. We believe that both approaches are valuable for advancing the state of the art of Robotic Soccer research.
One of the advantages of the Robotic Soccer domain is that it enables the direct comparison of different systems: they can be matched against each other in competitive tournaments. Systems such as Dynamo, Asada's, and our own, along with probably many others, will come together at least twice in the next two years. In November 1996, there will be a Micro-Robot competition in Taejon, Korea called MIROSOT96. The call for participation is a good example of one possible set of precise specifications for this domain [Stone et al. 1996]. Planning is also in progress for the 1997 robot soccer competition at IJCAI, to be called RoboCup97 [Kitano et al. 1997].
Along with the real robot competition, RoboCup97 will also include a simulator-based tournament using the Soccer Server system designed by Noda [Noda1995]. While we continue working on our real-world system, we have been concurrently developing learning techniques in simulation [Stone and Veloso1995, Stone and Veloso1996]. We eventually hope to transfer these learning techniques to the real system as we develop a complete Robotic Soccer architecture.
The rest of the paper is organized as follows. Section 2 gives an overview of the architecture of the entire Robotic Soccer system. Section 3 gives a detailed description of the existing cars, which we view as the main contribution of this paper. Section 4 draws conclusions and presents our ongoing research agenda.