Many other robotic soccer systems have been developed, both in simulation and with real robots. Using a simulator based closely upon the Dynasim system, we previously used memory-based learning to allow a player to learn when to shoot and when to pass the ball. We then used neural networks to teach a player to shoot a moving ball into the goal. In the soccer server, we then layered two learned behaviors to produce a higher-level multi-agent behavior: passing. Also in the soccer server, Matsubara et al. used a neural network to allow a player to learn when to shoot and when to pass (as opposed to the memory-based technique we used for a similar task). The RoboCup-97 simulator competition included 29 teams, many of which demonstrated novel scientific contributions, particularly in the field of multi-agent learning.
Robotic soccer with real robots was pioneered by the Dynamo group and Asada's laboratory. Recent international competitions have motivated the creation of a wide variety of robot soccer teams [14, 16].
Most previous research, both in simulation and on real robots, has concentrated on individual skills, with little attention paid to team coordination. A rare exception is the work in which team coordination is evolved using genetic programming. Unfortunately, no general teamwork structure can be extracted from this work, as the coordination is evolved in a domain-specific setting.