Research on Neuroevolution Methods

In difficult real-world learning tasks such as controlling robots, playing games, or pursuing or evading an enemy, there are no direct targets that specify the correct action for each situation. In such problems, optimal behavior must be learned by exploring different actions and assigning credit for good decisions based on sparse reinforcement feedback.

Our research in this area focuses on methods for evolving neural networks with genetic algorithms, i.e. evolutionary reinforcement learning, or Neuroevolution. Compared to standard reinforcement learning, Neuroevolution is often more robust against noisy and incomplete input, and it represents continuous states and actions naturally. Our methods include exploiting subpopulations, population statistics, and knowledge stored in the population, as well as evolving network structure in addition to weights. Much of this research compares neuroevolution to traditional methods on several benchmark tasks, such as pole balancing and mobile robot control.
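To make the basic idea concrete, the following is a minimal, hypothetical sketch of neuroevolution: a genetic algorithm evolves the weight vector of a small fixed-topology feedforward network, guided only by a scalar fitness score. XOR stands in here for a real reinforcement task such as pole balancing; the network sizes, selection scheme, and mutation parameters are illustrative assumptions, not the group's actual methods.

```python
import random
import math

# Tiny 2-2-1 feedforward network; the genome is a flat list of weights and biases.
INPUTS, HIDDEN = 2, 2
GENOME_LEN = (INPUTS + 1) * HIDDEN + (HIDDEN + 1)  # per-hidden-unit weights+bias, plus output layer

def activate(genome, x):
    """Forward pass: tanh hidden units, sigmoid output."""
    idx = 0
    hidden = []
    for _ in range(HIDDEN):
        s = genome[idx + INPUTS]                      # bias
        for i in range(INPUTS):
            s += genome[idx + i] * x[i]
        idx += INPUTS + 1
        hidden.append(math.tanh(s))
    s = genome[idx + HIDDEN]                          # output bias
    for h in range(HIDDEN):
        s += genome[idx + h] * hidden[h]
    return 1.0 / (1.0 + math.exp(-s))

# XOR cases stand in for episodes of a reinforcement task: the only learning
# signal is the aggregate fitness, not a per-action target.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(genome):
    """Higher is better: negative squared error over all cases."""
    return -sum((activate(genome, x) - t) ** 2 for x, t in CASES)

def evolve(pop_size=50, generations=200, mut_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]                  # truncation selection
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(GENOME_LEN)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [w + rng.gauss(0, 0.5) if rng.random() < mut_rate else w
                     for w in child]                  # Gaussian mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Methods such as those described below refine this basic loop, for example by evolving neurons in separate subpopulations or by evolving the network topology itself rather than fixing it in advance.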

This research is supported in part by the National Science Foundation under grant IIS-0083776 (and previously under IRI-9504317) and the Texas Higher Education Coordinating Board under grant ARP-003658-476-2001. Most of our projects are described below; for more details and for other projects, see publications in Neuroevolution Methods. For related projects, see Neuroevolution Applications and Reinforcement Learning.

Last update: 2003/03/30