Concurrent Layered Learning (2003)
Shimon Whiteson and Peter Stone
Hierarchies are powerful tools for decomposing complex control tasks into manageable subtasks. Several hierarchical approaches have been proposed for creating agents that can execute these tasks. Layered learning is one such hierarchical paradigm that relies on learning the various subtasks necessary to achieve the complete high-level goal. Layered learning prescribes training low-level behaviors (those closer to the environmental inputs) prior to high-level behaviors. In past implementations, these lower-level behaviors were always frozen before advancing to the next layer. In this paper, we hypothesize that there are situations where layered learning would work better if the lower layers were allowed to keep learning concurrently with the training of subsequent layers, an approach we call concurrent layered learning. We identify a situation where concurrent layered learning is beneficial and present detailed empirical results verifying our hypothesis. In particular, we use neuro-evolution to concurrently learn two layers of a layered learning approach to a simulated robotic soccer keepaway task. The main contribution of this paper is evidence that there exist situations where concurrent layered learning outperforms traditional layered learning. Thus, we establish that, when using layered learning, the concurrent training of layers can be an effective option.
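The distinction between the two training regimes can be illustrated with a toy sketch. The paper evolves neural-network layers for keepaway; here, as a deliberately simplified stand-in, each "layer" is a single scalar parameter, learning is a (1+1)-style evolutionary hill climber rather than full neuro-evolution, and the fitness function is a hypothetical objective chosen so that the best setting of the low layer depends on the high layer (i.e., the layers must co-adapt). None of these specifics come from the paper; they only illustrate why freezing the low layer before training the high layer can lock in a suboptimal joint solution.

```python
import random

random.seed(0)


def fitness(low, high):
    # Hypothetical co-adaptation objective: the joint optimum is
    # low = high = 1, but the best low-layer value depends on the
    # high layer's current value. Higher is better; maximum is 0.
    return -((low - high) ** 2 + (high - 1.0) ** 2)


def mutate(x, step=0.1):
    # Simple Gaussian-free mutation: perturb within +/- step.
    return x + random.uniform(-step, step)


def traditional(generations=400):
    # Traditional layered learning: train the low layer first (with
    # the high layer fixed), then freeze it and train the high layer.
    low, high = 0.0, 0.0
    for _ in range(generations // 2):
        cand = mutate(low)
        if fitness(cand, high) > fitness(low, high):
            low = cand
    for _ in range(generations // 2):
        cand = mutate(high)
        if fitness(low, cand) > fitness(low, high):
            high = cand
    return fitness(low, high)


def concurrent(generations=400):
    # Concurrent layered learning: both layers keep adapting for the
    # entire run, so the low layer can track changes in the high layer.
    low, high = 0.0, 0.0
    for _ in range(generations):
        cl, ch = mutate(low), mutate(high)
        if fitness(cl, ch) > fitness(low, high):
            low, high = cl, ch
    return fitness(low, high)
```

Under this contrived objective, the traditional regime tops out at fitness -0.5 (the low layer is frozen at the value that was optimal for the untrained high layer), while the concurrent regime can approach the joint optimum of 0 — mirroring, in miniature, the situation the paper identifies where concurrent training of layers pays off.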
In AAMAS 2003: Proceedings of the Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, Jeffrey S. Rosenschein, Tuomas Sandholm, Michael Wooldridge, and Makoto Yokoo (Eds.), pp. 193–200, New York, NY, July 2003. ACM Press.
