On-Line Evolutionary Computation for Reinforcement Learning in Stochastic Domains (2006)
Shimon Whiteson and Peter Stone
In reinforcement learning, an agent interacting with its environment strives to learn a policy that specifies, for each state it may encounter, what action to take. Evolutionary computation is one of the most promising approaches to reinforcement learning but its success is largely restricted to off-line scenarios. In on-line scenarios, an agent must strive to maximize the reward it accrues while it is learning. Temporal difference (TD) methods, another approach to reinforcement learning, naturally excel in on-line scenarios because they have selection mechanisms for balancing the need to search for better policies (exploration) with the need to accrue maximal reward (exploitation). This paper presents a novel way to strike this balance in evolutionary methods by borrowing the selection mechanisms used by TD methods to choose individual actions and using them in evolution to choose policies for evaluation. Empirical results in the mountain car and server job scheduling domains demonstrate that these techniques can substantially improve evolution's on-line performance in stochastic domains.
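The core idea can be illustrated with an epsilon-greedy sketch: instead of using such a selection rule to pick an individual action, evolution uses it to pick which member of the population to evaluate next. This is only an illustrative sketch, not the paper's exact algorithm; the function name, fitness representation, and epsilon value here are assumptions.

```python
import random

def select_policy_for_evaluation(fitness, epsilon=0.25):
    """Epsilon-greedy selection at the policy level: with probability
    epsilon evaluate a random population member (exploration), otherwise
    evaluate the current best (exploitation). `fitness` holds a running
    average reward for each policy in the population.
    (Illustrative sketch; names and epsilon are assumptions.)"""
    if random.random() < epsilon:
        return random.randrange(len(fitness))
    return max(range(len(fitness)), key=lambda i: fitness[i])

# Toy usage: three candidate policies with running fitness estimates.
fitness = [1.0, 3.5, 2.0]
choice = select_policy_for_evaluation(fitness, epsilon=0.0)
# With epsilon=0.0 the rule is purely greedy and picks index 1.
```

In a stochastic domain, each selected policy's episode return would be folded back into its running fitness estimate, so that repeated evaluations sharpen the comparison among candidates while exploitation keeps on-line reward high.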
Citation:
In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1577-84, July 2006.