Function Approximation via Tile Coding: Automating Parameter Choice (2005)
Alexander A. Sherstov and Peter Stone
Reinforcement learning (RL) is a powerful abstraction of sequential decision making that has an established theoretical foundation and has proven effective in a variety of small, simulated domains. The success of RL on real-world problems with large, often continuous state and action spaces hinges on effective function approximation. Of the many function approximation schemes proposed, tile coding strikes an empirically successful balance among representational power, computational cost, and ease of use, and has been widely adopted in recent RL work. This paper demonstrates that the performance of tile coding is quite sensitive to parameterization. We present detailed experiments that isolate the effects of parameter choices and provide guidance on setting them. We further illustrate that no single parameterization achieves the best performance throughout the learning curve, and contribute an automated technique for adjusting tile-coding parameters online. Our experimental findings confirm the superiority of adaptive parameterization to fixed settings. This work aims to automate the choice of approximation scheme not only on a per-problem basis but also throughout the learning process, eliminating the need for a substantial tuning effort.
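To make the technique concrete, the following is a minimal sketch of a tile coder, not the paper's implementation: it maps a continuous state (components in [0, 1)) to the set of active tile indices across several offset tilings. The parameter names (`num_tilings`, `tiles_per_dim`) and the uniform-offset scheme are our illustrative assumptions.

```python
def active_tiles(state, num_tilings=8, tiles_per_dim=10):
    """Return one active tile index per tiling for a state whose
    components lie in [0, 1). Illustrative sketch only; parameter
    names and the offset scheme are assumptions, not the paper's."""
    indices = []
    for t in range(num_tilings):
        # Each tiling is shifted by a fraction of one tile width.
        offset = t / (num_tilings * tiles_per_dim)
        coords = [min(int((s + offset) * tiles_per_dim), tiles_per_dim - 1)
                  for s in state]
        # Flatten the per-dimension tile coordinates into one index,
        # then shift by the tiling number so tilings do not collide.
        idx = 0
        for c in coords:
            idx = idx * tiles_per_dim + c
        indices.append(t * tiles_per_dim ** len(state) + idx)
    return indices
```

A linear value estimate is then the sum of the weights at the returned indices; the tile width and number of tilings are exactly the kinds of parameters whose sensitivity the paper studies.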
In SARA 2005, J.-D. Zucker and L. Saitta (Eds.), Vol. 3607, pp. 194-205, Berlin 2005. Springer-Verlag.
