Value Function Transfer for General Game Playing (2006)
We present value function transfer techniques for General Game Playing (GGP) with Reinforcement Learning. We focus on two-player, alternate-move, complete-information board games and use the GGP simulator and framework. Our approach is two-pronged: first, we extract knowledge about crucial regions in the value-function space of any game in the genre; then, for each target game, we generate a smaller version of the game and extract symmetry information from the board setup. The combined value-function and symmetry knowledge allows us to achieve significant transfer via Reinforcement Learning to larger board games, while exploring only a limited portion of the state space by exploiting symmetry.
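
The abstract only sketches the approach at a high level. The Python snippet below is a minimal illustrative sketch, not the authors' GGP implementation, of one of the underlying ideas: using board symmetries to collapse a tabular value function, and seeding a larger target game's table with values learned on a smaller source game. The board encoding, the dihedral symmetry set, and the averaging-based transfer rule are all assumptions made for illustration.

# Minimal sketch (not the authors' GGP system): board symmetries collapse the
# state space of a tabular value function, and values learned on a small
# source game seed the prior for a larger target game's table.

def rotations_and_reflections(board, n):
    """Yield the 8 dihedral transforms of an n x n board given as a flat tuple."""
    grid = [list(board[i * n:(i + 1) * n]) for i in range(n)]
    for _ in range(4):
        # rotate 90 degrees clockwise
        grid = [list(row) for row in zip(*grid[::-1])]
        yield tuple(cell for row in grid for cell in row)
        # horizontal reflection of the current rotation
        yield tuple(cell for row in grid for cell in reversed(row))

def canonical(board, n):
    """Pick one representative among all symmetric variants of a position."""
    return min(rotations_and_reflections(board, n))

class SymmetricValueTable:
    """Tabular state-value function storing one entry per symmetry class."""

    def __init__(self, n, default=0.0):
        self.n = n
        self.default = default
        self.values = {}

    def get(self, board):
        return self.values.get(canonical(board, self.n), self.default)

    def update(self, board, target, alpha=0.1):
        key = canonical(board, self.n)
        v = self.values.get(key, self.default)
        self.values[key] = v + alpha * (target - v)

# Hypothetical transfer step: average the values learned on a small board and
# use that as the prior (default value) for the larger target game's table.
small_game = SymmetricValueTable(n=3)
small_game.update((1, 0, 0, 0, -1, 0, 0, 0, 0), target=0.5)  # toy training signal
prior = sum(small_game.values.values()) / max(len(small_game.values), 1)
large_game = SymmetricValueTable(n=4, default=prior)
print(large_game.get(tuple([0] * 16)))  # unseen state starts from the transferred prior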
Citation:
In ICML workshop on Structural Knowledge Transfer for Machine Learning, June 2006.
Authors:
Bikramjit Banerjee (Postdoctoral Alumni), bikramjitbanerjee [at] yahoo com
Gregory Kuhlmann (Ph.D. Alumni), kuhlmann [at] cs utexas edu
Peter Stone (Faculty), pstone [at] cs utexas edu