Peter Stone's Selected Publications



General Game Learning using Knowledge Transfer

Bikramjit Banerjee and Peter Stone. General Game Learning using Knowledge Transfer. In The 20th International Joint Conference on Artificial Intelligence, pp. 672–677, January 2007.
IJCAI-07

Download

[PDF] (118.1 kB)  [postscript] (198.5 kB)

Abstract

We present a reinforcement learning game player that can interact with a General Game Playing system and transfer knowledge learned in one game to expedite learning in many other games. We use the technique of value-function transfer, where general features are extracted from the state space of a previous game and matched with the completely different state space of a new game. To capture the underlying similarity of vastly disparate state spaces arising from different games, we use a game-tree lookahead structure for features. We show that such feature-based value-function transfer learns superior policies faster than a reinforcement learning agent that does not use knowledge transfer. Furthermore, knowledge transfer using lookahead features can capture opponent-specific value functions, i.e., it can exploit an opponent's weaknesses to learn faster than a reinforcement learner that uses lookahead with minimax (pessimistic) search against the same opponent.
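The value-function transfer described above can be illustrated with a minimal sketch. This is not the paper's implementation; the class and feature choice below are illustrative assumptions. The idea it demonstrates: key the value function on game-independent lookahead features (here, a depth-1 count of immediately winning and losing successors) rather than on raw states, so that values learned in a source game can seed a learner in a completely different target game.

```python
# Hedged sketch of feature-based value-function transfer.
# Assumption: the paper uses richer game-tree lookahead features;
# this stand-in uses only counts of terminal successors at depth 1.
from collections import defaultdict

def lookahead_features(state, moves, result):
    """Map a concrete state to a game-independent feature key.

    `moves(state)` yields successor states; `result(s)` returns
    +1 for a win, -1 for a loss, 0 otherwise.
    """
    outcomes = [result(s) for s in moves(state)]
    # (number of immediately winning moves, number of immediately losing moves)
    return (outcomes.count(1), outcomes.count(-1))

class TransferLearner:
    """Tabular value function over lookahead features with TD-style updates."""

    def __init__(self, alpha=0.1, source=None):
        # When `source` is the feature-value table of a previously
        # trained learner, those values initialize learning here.
        self.V = defaultdict(float, source or {})
        self.alpha = alpha

    def value(self, feat):
        return self.V[feat]

    def update(self, feat, target):
        # Move the stored value a step toward the observed return.
        self.V[feat] += self.alpha * (target - self.V[feat])
```

Because the keys are abstract lookahead patterns rather than board positions, `TransferLearner(source=dict(trained.V))` starts a new game with the source game's learned values in place, which is the mechanism the abstract credits for faster learning.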

BibTeX Entry

@InProceedings{IJCAI07-bikram,
  author    = "Bikramjit Banerjee and Peter Stone",
  title     = "General Game Learning using Knowledge Transfer",
  booktitle = "The 20th International Joint Conference on Artificial Intelligence",
  month     = "January",
  year      = "2007",
  pages     = "672--677",
  abstract  = "We present a reinforcement learning game player that can
    interact with a General Game Playing system and transfer knowledge
    learned in one game to expedite learning in many other games. We use
    the technique of value-function transfer, where general features are
    extracted from the state space of a previous game and matched with
    the completely different state space of a new game. To capture the
    underlying similarity of vastly disparate state spaces arising from
    different games, we use a game-tree lookahead structure for
    features. We show that such feature-based value-function transfer
    learns superior policies faster than a reinforcement learning agent
    that does not use knowledge transfer. Furthermore, knowledge
    transfer using lookahead features can capture opponent-specific
    value functions, i.e., it can exploit an opponent's weaknesses to
    learn faster than a reinforcement learner that uses lookahead with
    minimax (pessimistic) search against the same opponent.",
  wwwnote   = {<a href="http://www.ijcai-07.org/">IJCAI-07</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Aug 15, 2014 16:26:03