Evolving Cooperation in Multiagent Systems (2007)
Author: Chern Yong
In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. With the Multi-agent ESP method, such agents can be evolved effectively as separate networks that are rewarded together as a team. This demo shows two examples of evolved behavior in the prey-capture task in a toroidal grid world.
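The toroidal grid world is what makes the blocking strategies below possible: moving off one edge re-enters the opposite edge, so the prey can be trapped between predators with no open direction left. A minimal sketch of such a world, with the grid size and coordinate conventions chosen for illustration (they are not details of the actual demo):

```python
# Minimal sketch of a toroidal grid world. GRID is an assumed size,
# not a parameter taken from the Multi-agent ESP implementation.

GRID = 100  # world is GRID x GRID and wraps around at the edges

def wrap(pos):
    """Wrap an (x, y) position so moving off one edge re-enters the other."""
    x, y = pos
    return (x % GRID, y % GRID)

def toroidal_offset(a, b):
    """Shortest signed (dx, dy) from a to b, accounting for wraparound."""
    def axis(d):
        d %= GRID
        return d - GRID if d > GRID // 2 else d
    return (axis(b[0] - a[0]), axis(b[1] - a[1]))
```

For example, a predator at (99, 0) that steps right ends up at (0, 0), and the shortest offset from (1, 1) to (99, 99) is (-2, -2), i.e. two steps back across the edge rather than 98 steps forward.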

In the role-based animation, the predator agents (red, green, and blue squares) do not sense each other directly. Instead, they learn to coordinate through stigmergy, i.e. through changes in the environment that result from their actions. The red agent has learned the role of a blocker, waiting in the path of the prey (shown as X). The other two are chasers, driving the prey towards the blocker until the prey has nowhere to run (remember the world is a toroid). This kind of role-based cooperation is easier to learn, more robust, and more effective than communication-based cooperation in this task. The team learns behavior similar to a well-trained soccer team, where the players know what to expect from their teammates, making direct communication unnecessary.
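Because the predators cannot sense each other in this setup, each controller's input reduces to the prey's relative position alone; all coordination must emerge from how the prey reacts to the team's combined pressure. A sketch of what such an input vector might look like (the names and conventions are illustrative assumptions, not the demo's actual encoding):

```python
# Sketch of the role-based sensory setup: each predator's network sees
# only the prey's toroidal offset; teammates are invisible to it.
# GRID and the input layout are illustrative assumptions.

GRID = 100

def axis_offset(a, b):
    """Shortest signed offset along one axis of the torus."""
    d = (b - a) % GRID
    return d - GRID if d > GRID // 2 else d

def role_based_inputs(predator, prey):
    """Input vector for one predator: prey offset only."""
    return [axis_offset(predator[0], prey[0]),
            axis_offset(predator[1], prey[1])]
```

With only this input, a blocker cannot "agree" on its role with the chasers; the role is stable because the evolved networks consistently produce complementary behaviors.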

In the communication-based animation, the predators broadcast their locations to all other predators; their coordination is therefore based on communication. The predators all first chase the prey vertically, from different directions, eventually forcing it to flee horizontally. At that point, the red agent assumes the role of the blocker and the other two chase the prey towards it until it is caught between them (the world wraps around at that point). In this typical behavior of communicating agents, the team members use different strategies at different times. The behavior is more flexible, but harder to learn, and neither as robust nor as effective. It resembles play in pickup soccer, where the players have to constantly observe what their teammates are doing and adapt to it.
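In this setup each predator's input grows to include its teammates' broadcast positions, which enlarges the search space the networks must master. Continuing the same illustrative sketch (again, the encoding is an assumption, not the demo's actual one):

```python
# Sketch of the communication-based sensory setup: the input vector
# contains the prey offset plus an offset to each broadcasting teammate.
# GRID and the input layout are illustrative assumptions.

GRID = 100

def axis_offset(a, b):
    d = (b - a) % GRID
    return d - GRID if d > GRID // 2 else d

def offset(p, q):
    return [axis_offset(p[0], q[0]), axis_offset(p[1], q[1])]

def communication_inputs(predator, prey, teammates):
    """Prey offset followed by one offset per broadcasting teammate."""
    inputs = offset(predator, prey)
    for mate in teammates:
        inputs += offset(predator, mate)
    return inputs
```

With two teammates the vector grows from 2 to 6 values, so the networks must learn both what to do with the extra inputs and when to ignore them, which is one intuition for why this variant is harder to evolve.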

The conclusion is that role-based cooperation is a surprisingly effective approach in certain multi-agent domains such as prey capture.

Chern Han Yong, Masters Alumnus, cherny [at] nus edu sg
Risto Miikkulainen, Faculty, risto [at] cs utexas edu
IJCNN-2013 Tutorial on Evolution of Neural Networks 2013
Risto Miikkulainen, Tutorial slides.
Multiagent Learning through Neuroevolution 2012
Risto Miikkulainen, Eliana Feasley, Leif Johnson, Igor Karpov, Padmini Rajagopalan, Aditya Rawal, and Wesley Tansey, In Advances in Computational Intelligence, J. Liu et al. (Eds.), Vol. LNCS 7311, pp. 24-46, Berlin, Heidelberg: Springer, 2012.
Coevolution of Role-Based Cooperation in Multi-Agent Systems 2010
Chern Han Yong and Risto Miikkulainen, IEEE Transactions on Autonomous Mental Development, Vol. 1 (2010), pp. 170-186.
Coevolution of Role-Based Cooperation in Multi-Agent Systems 2007
Chern Han Yong and Risto Miikkulainen, Technical Report AI07-338, Department of Computer Sciences, The University of Texas at Austin.
Cooperative Coevolution of Multi-Agent Systems 2000
Chern Han Yong, Technical Report HR-00-01, Department of Computer Sciences, The University of Texas at Austin.
ESP C++ The ESP package contains the source code for the Enforced Sub-Populations system written in C++. ESP is an extension t... 2000