Bayesian Models of Nonstationary Markov Decision Problems (2005)
Standard reinforcement learning algorithms generate policies that optimize expected future rewards in a priori unknown domains, but they assume that the domain does not change over time. Prior work cast the reinforcement learning problem as a Bayesian estimation problem, using experience data to condition a probability distribution over domains. In this paper we propose an elaboration of the typical Bayesian model that accounts for the possibility that some aspect of the domain changes spontaneously during learning. We develop a reinforcement learning algorithm based on this model that we expect to react more intelligently to sudden changes in the behavior of the environment.
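The core idea can be illustrated with a minimal sketch: maintain a Dirichlet posterior over the outcomes of a state-action pair, and account for possible spontaneous change by discounting accumulated evidence toward the prior at each step. This is an illustrative assumption standing in for the paper's actual change model; the class name, `change_prob` parameter, and forgetting scheme are hypothetical, not taken from the paper.

```python
class DirichletTransitionModel:
    """Bayesian posterior over one (state, action)'s next-state
    distribution. Old evidence is exponentially discounted toward the
    prior, a simple stand-in for a per-step probability that the
    domain changed spontaneously (illustrative, not the paper's model)."""

    def __init__(self, n_states, prior=1.0, change_prob=0.01):
        self.counts = [prior] * n_states  # Dirichlet pseudo-counts
        self.prior = prior
        # Assumed per-step probability that the domain changed.
        self.change_prob = change_prob

    def update(self, next_state):
        # Discount past evidence in proportion to the chance the
        # domain changed, then incorporate the new observation.
        keep = 1.0 - self.change_prob
        self.counts = [self.prior + keep * (c - self.prior)
                       for c in self.counts]
        self.counts[next_state] += 1.0

    def posterior_mean(self):
        # Expected transition probabilities under the Dirichlet posterior.
        total = sum(self.counts)
        return [c / total for c in self.counts]
```

Because older observations carry geometrically decaying weight, the posterior tracks a sudden shift in the environment far faster than plain count-based estimation, at the cost of higher variance in a truly stationary domain.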
In IJCAI 2005 workshop on Planning and Learning in A Priori Unknown or Dynamic Domains, August 2005.

Nicholas Jong, Ph.D. Alumni, nickjong [at] me.com
Peter Stone, Faculty, pstone [at] cs.utexas.edu