Constructing Intelligent Agents in Simulated Worlds
Active from 2008 to 2010
This project aims to construct intelligent agents in sophisticated simulated worlds through biologically inspired computation methods. Despite successes in structured domains like board games and medical diagnosis, traditional artificial intelligence (AI) techniques are unlikely to lead to agents that can operate in the physical world around us. The real world is noisy, dynamic, high-dimensional, and only partially observable: very different from the structured worlds where logic and search have been so successful. However, recent increases in computing power provide a new opportunity, for two reasons. First, it is now possible to simulate the physical world in great detail, providing realistic challenges for AI in fully known and controllable environments. Second, biologically inspired computation techniques, such as neural networks, evolutionary computation, and reinforcement learning, have become practical in complex domains. Applying them to realistic simulations is a major step towards building intelligent agents for the real world.

In this project, we aim to take advantage of this opportunity. Over the past several years, we have developed methods for evolving neural networks in partially observable Markov decision tasks (e.g., NEAT), and have built a 3D simulation platform where complex behavior can be evaluated (NERO). In the proposed work, these technologies are brought together to construct intelligent agents.
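To illustrate the general idea of neuroevolution, the sketch below evolves the weights of a tiny fixed-topology network on XOR, a classic neuroevolution benchmark. This is only a toy illustration under simplifying assumptions: the actual NEAT method also evolves the network topology itself (adding nodes and connections) and uses speciation, none of which is shown here. All names and parameters are illustrative, not taken from the NEAT implementation.

```python
import math, random

random.seed(0)

# XOR truth table: a standard benchmark for neuroevolution.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # Fixed 2-2-1 topology (9 weights incl. biases); real NEAT also
    # evolves the topology, which this sketch does not attempt.
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # Negative squared error over all cases: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

# Evolve: keep the 10 best as elites, refill with mutated copies.
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [[g + random.gauss(0, 0.5) for g in random.choice(parents)]
                     for _ in range(40)]

best = max(pop, key=fitness)
```

Because the elites are carried over unchanged, the best fitness in the population never decreases from one generation to the next; the mutated copies provide the exploration.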

The challenge is that while NEAT is good at discovering sophisticated low-level control behaviors, it is difficult for it to learn high-level strategic behaviors. Such behaviors often depend on crucial details in the input and are multimodal, i.e. composed of distinctly different behaviors at different times. The proposed solution is twofold: (1) to evolve networks that construct their own high-level features to represent such details, and (2) to evolve useful component behaviors and their effective combinations explicitly in two separate populations. This approach has proven viable in preliminary experiments.
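The two-population idea can be sketched in miniature as cooperative coevolution: one population holds candidate component behaviors, the other holds combiners that decide which component to deploy in each mode of the task. The example below is a hypothetical toy, not the project's actual method: components are single weights, the "multimodal" task is to output x in one mode and -x in the other, and each population is scored against the current best collaborator from the other.

```python
import random

random.seed(1)

# Toy multimodal task: the agent sees (flag, x); the correct action is
# x when flag == 0 and -x when flag == 1 (two distinct behaviors).
CASES = [(f, x) for f in (0, 1) for x in (-2, -1, 1, 2)]

def target(f, x):
    return x if f == 0 else -x

# Population A: component behaviors, each a pair of weights (action = w * x).
# Population B: combiners, mapping the flag to an index into the components.
def evaluate(components, combiner):
    err = 0.0
    for f, x in CASES:
        w = components[combiner[f]]      # combiner picks a component per mode
        err += (w * x - target(f, x)) ** 2
    return -err                          # higher is better

comps_pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(30)]
comb_pop = [[random.randrange(2) for _ in range(2)] for _ in range(30)]

for gen in range(100):
    # Cooperative coevolution: score each individual together with the
    # current best collaborator from the other population.
    best_comb = max(comb_pop, key=lambda c: evaluate(comps_pop[0], c))
    best_comps = max(comps_pop, key=lambda cs: evaluate(cs, best_comb))
    comps_pop.sort(key=lambda cs: evaluate(cs, best_comb), reverse=True)
    comb_pop.sort(key=lambda c: evaluate(best_comps, c), reverse=True)
    # Elites survive; components mutate, combiners are resampled
    # (their space is tiny, so random search suffices here).
    comps_pop = comps_pop[:10] + [
        [w + random.gauss(0, 0.3) for w in random.choice(comps_pop[:10])]
        for _ in range(20)]
    comb_pop = comb_pop[:10] + [
        [random.randrange(2) for _ in range(2)] for _ in range(20)]

best_components, best_combiner = comps_pop[0], comb_pop[0]
```

The key property the sketch demonstrates is the division of labor: neither population has to solve the whole task, since the components only need to be good at one mode each, and the combiner only needs to learn when to invoke which.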

In this project, the above two ideas will first be generalized and combined into an integrated neuroevolution learning method. This method will then be tested in several challenging benchmark tasks and compared to other methods for implementing high-level strategic behavior. Finally, the method will be scaled up to a robotic soccer simulation in NERO, where it will be compared with existing hand-coded and learned teams, and evaluated in human-subject experiments. The end result of the project will be a general method for learning high-level strategic behavior, with a thorough experimental understanding of what makes it work and how it compares with other approaches.

The technology developed in the project will be immediately useful for building high-level control systems for robots and virtual agents. Such agents learn effective behavioral strategies and adapt them when the world changes. This ability makes it possible to build video games that are more engaging and entertaining than current games, including games that can serve as training environments for people. In the long term, the technology should lead to safer and more efficient vehicle, traffic, and robotic control, improved process and manufacturing optimization, and more efficient computer and communication systems. In so doing, it will take us a step closer to deploying artificial agents into the real world.

This research is supported by the Texas Higher Education Coordinating Board under grant 003658-0036-2007.

Igor V. Karpov Ph.D. Student ikarpov [at] gmail com
Risto Miikkulainen Faculty risto [at] cs utexas edu
Nate Kohl Ph.D. Alumni nate [at] natekohl net
Jacob Schrum Ph.D. Student schrum2 [at] cs utexas edu
Vinod Valsalam Ph.D. Alumni vkv [at] alumni utexas net
Evolving Multi-modal Behavior in NPCs 2009
Jacob Schrum and Risto Miikkulainen, In IEEE Symposium on Computational Intelligence and Games (CIG 2009), pp. 325--332, Milan, Italy, September 2009. (Best Student Paper Award).
Evolving Neural Networks for Strategic Decision-Making Problems 2009
Nate Kohl and Risto Miikkulainen, Neural Networks, Special issue on Goal-Directed Neural Systems (2009).
Constructing Complex NPC Behavior via Multi-Objective Neuroevolution 2008
Jacob Schrum and Risto Miikkulainen, In Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2008), pp. 108--113, Stanford, California, 2008.
Evolving Opponent Models for Texas Hold 'Em 2008
Alan J. Lockett and Risto Miikkulainen, In IEEE Conference on Computational Intelligence in Games, Perth, Australia, 2008.
Transfer of Evolved Pattern-Based Heuristics in Games 2008
Erkin Bahceci and Risto Miikkulainen, In IEEE Symposium on Computational Intelligence and Games (CIG 2008), pp. 220--227, Perth, Australia, December 2008.
Coevolution of Role-Based Cooperation in Multi-Agent Systems 2007
Chern Han Yong and Risto Miikkulainen, Technical Report AI07-338, Department of Computer Sciences, The University of Texas at Austin.
OpenNERO OpenNERO is a general research and education platform for artificial intelligence. The platform is based on a simulatio... 2010

rtNEAT C++ The rtNEAT package contains source code implementing the real-time NeuroEvolution of Augmenting Topologies method. In ad... 2006