Function Approximation via Tile Coding: Automating Parameter Choice.
Alexander A. Sherstov and Peter Stone.
In J.-D. Zucker and L. Saitta, editors, SARA 2005, volume 3607 of Lecture Notes in Artificial Intelligence, pp. 194–205, Springer-Verlag, Berlin, 2005.
SARA-05. Official version from Publisher's Webpage. © Springer-Verlag.
[PDF] (187.7kB) [postscript] (583.4kB) [slides.pdf] (193.7kB)
Reinforcement learning (RL) is a powerful abstraction of sequential decision making that has an established theoretical foundation and has proven effective in a variety of small, simulated domains. The success of RL on real-world problems with large, often continuous state and action spaces hinges on effective function approximation. Of the many function approximation schemes proposed, tile coding strikes an empirically successful balance among representational power, computational cost, and ease of use and has been widely adopted in recent RL work. This paper demonstrates that the performance of tile coding is quite sensitive to parameterization. We present detailed experiments that isolate the effects of parameter choices and provide guidance to their setting. We further illustrate that no single parameterization achieves the best performance throughout the learning curve, and contribute an automated technique for adjusting tile-coding parameters online. Our experimental findings confirm the superiority of adaptive parameterization to fixed settings. This work aims to automate the choice of approximation scheme not only on a problem basis but also throughout the learning process, eliminating the need for a substantial tuning effort.
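The two tile-coding parameters the paper studies, tile width and number of tilings, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, 1-D state space, and uniform-offset scheme are illustrative assumptions only:

```python
class TileCoder:
    """Illustrative 1-D tile coder (not the paper's implementation).

    Each of the n_tilings grids partitions [low, high] into tiles of
    width tile_width, with each grid offset by a fraction of a tile.
    A state activates exactly one tile per tiling; its value estimate
    is the sum of the active tiles' weights.
    """

    def __init__(self, n_tilings=4, tile_width=0.5, low=0.0, high=1.0):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.low = low
        # Offset each tiling by an equal fraction of the tile width.
        self.offsets = [i * tile_width / n_tilings for i in range(n_tilings)]
        n_tiles = int((high - low) / tile_width) + 2  # padding for offsets
        self.weights = [[0.0] * n_tiles for _ in range(n_tilings)]

    def active_tiles(self, x):
        """Index of the single active tile in each tiling."""
        return [int((x - self.low + off) / self.tile_width)
                for off in self.offsets]

    def predict(self, x):
        return sum(self.weights[t][i]
                   for t, i in enumerate(self.active_tiles(x)))

    def update(self, x, target, alpha=0.1):
        """Gradient step toward target; step size split across tilings."""
        error = target - self.predict(x)
        for t, i in enumerate(self.active_tiles(x)):
            self.weights[t][i] += alpha / self.n_tilings * error
```

Wider tiles generalize updates to more neighboring states at the cost of resolution; more tilings refine resolution without shrinking the generalization region, which is the trade-off the paper's parameter study isolates.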
@incollection(SARA05,
author="Alexander A.\ Sherstov and Peter Stone",
title="Function Approximation via Tile Coding: Automating Parameter Choice",
booktitle="SARA 2005",
series="Lecture Notes in Artificial Intelligence",
editor="J.-D.\ Zucker and I.\ Saitta",
volume="3607",
publisher="Springer-Verlag",
address="Berlin",
year="2005",
pages="194--205",
abstract={
Reinforcement learning (RL) is a powerful
abstraction of sequential decision making that has
an established theoretical foundation and has proven
effective in a variety of small, simulated domains.
The success of RL on real-world problems with large,
often continuous state and action spaces hinges on
effective \emph{function approximation.} Of the many
function approximation schemes proposed, \emph{tile
coding} strikes an empirically successful balance
among representational power, computational cost,
and ease of use and has been widely adopted in
recent RL work. This paper demonstrates that the
performance of tile coding is quite sensitive to
parameterization. We present detailed experiments
that isolate the effects of parameter choices and
provide guidance to their setting. We further
illustrate that \emph{no single parameterization}
achieves the best performance throughout the
learning curve, and contribute an \emph{automated}
technique for adjusting tile-coding parameters
online. Our experimental findings confirm the
superiority of adaptive parameterization to fixed
settings. This work aims to automate the choice of
approximation scheme not only on a problem basis but
also throughout the learning process, eliminating
the need for a substantial tuning effort.
},
wwwnote={<a href="http://sara2005.limbio-paris13.org/">SARA-05</a>.<br>
Official version from <a href="http://dx.doi.org/10.1007/11527862_14">Publisher's Webpage</a> &copy; Springer-Verlag},
)
Generated by bib2html.pl (written by Patrick Riley) on Thu Oct 23, 2025 16:14:14