Evaluating Modular Neuroevolution in Robotic Keepaway Soccer (2012)
Keepaway is a subtask of robot soccer in which three 'keepers' attempt to keep possession of the ball while a 'taker' tries to steal it from them. Because it is less complex than full robot soccer, it lends itself well as a testbed for multi-agent systems. This thesis comprehensively evaluates various learning methods based on neuroevolution with Enforced Sub-Populations (ESP) in the RoboCup soccer simulator. Both single- and multi-component ESP are evaluated using various learning methods on homogeneous and heterogeneous teams of agents. In particular, the effectiveness of modularity and task decomposition for evolving keepaway teams is evaluated. It is shown that in the RoboCup soccer simulator, homogeneous agents controlled by monolithic networks perform best. More complex learning approaches such as layered learning, concurrent layered learning, and co-evolution decrease performance, as does making the agents heterogeneous. The results are also compared with previous results in the keepaway domain.
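The core idea of ESP, as named above, is to evolve each hidden neuron of a network in its own subpopulation: networks are assembled by sampling one neuron per subpopulation, evaluated as a team, and the fitness is credited back to the participating neurons. A minimal sketch of that loop follows; the network size, XOR fitness task, and all hyperparameters here are illustrative assumptions, not the thesis's keepaway setup.

```python
import math
import random

# Illustrative dimensions and hyperparameters (not from the thesis).
N_INPUTS, N_HIDDEN = 2, 3
SUBPOP_SIZE = 20
GENERATIONS = 60

def new_neuron():
    # One hidden neuron = its input weights plus its single output weight.
    return [random.uniform(-1, 1) for _ in range(N_INPUTS + 1)]

def forward(neurons, x):
    # Assemble a one-hidden-layer network from one neuron per subpopulation.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(n[:N_INPUTS], x)))
              for n in neurons]
    return sum(h * n[N_INPUTS] for h, n in zip(hidden, neurons))

# Toy fitness task standing in for keepaway: negative squared error on XOR.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(neurons):
    return -sum((forward(neurons, x) - y) ** 2 for x, y in XOR)

def evolve():
    random.seed(0)
    subpops = [[new_neuron() for _ in range(SUBPOP_SIZE)]
               for _ in range(N_HIDDEN)]
    for _ in range(GENERATIONS):
        scores = [[0.0] * SUBPOP_SIZE for _ in range(N_HIDDEN)]
        trials = [[0] * SUBPOP_SIZE for _ in range(N_HIDDEN)]
        # Evaluate randomly assembled networks; credit fitness to each
        # participating neuron (ESP's cooperative credit assignment).
        for _ in range(10 * SUBPOP_SIZE):
            idx = [random.randrange(SUBPOP_SIZE) for _ in range(N_HIDDEN)]
            f = fitness([subpops[h][i] for h, i in enumerate(idx)])
            for h, i in enumerate(idx):
                scores[h][i] += f
                trials[h][i] += 1
        # Within each subpopulation, keep the better half by average
        # fitness and refill with mutated copies of the survivors.
        for h in range(N_HIDDEN):
            avg = [scores[h][i] / max(trials[h][i], 1)
                   for i in range(SUBPOP_SIZE)]
            order = sorted(range(SUBPOP_SIZE), key=lambda i: avg[i],
                           reverse=True)
            elite = [subpops[h][i] for i in order[:SUBPOP_SIZE // 2]]
            subpops[h] = elite + [
                [w + random.gauss(0, 0.3) for w in random.choice(elite)]
                for _ in range(SUBPOP_SIZE - len(elite))]
    # Best-ranked neuron from each subpopulation forms the final network.
    return [sp[0] for sp in subpops]

best = evolve()
```

In the thesis's multi-component variants, several such networks (or sets of subpopulations) would control different parts of the task; this sketch shows only the single-network case.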
Master's Thesis, Department of Computer Science, The University of Texas at Austin. 54 pages.

Anand Subramoney, Master's Alumni, anands [at] cs utexas edu