Continual Learning in Reinforcement Environments

by

MARK BISHOP RING, A.B., M.S.C.S.


Dissertation

Presented to the Faculty of the Graduate School of
The University of Texas at Austin
in Partial Fulfillment
of the Requirements
for the Degree of

Doctor of Philosophy


THE UNIVERSITY OF TEXAS AT AUSTIN

August 1994

Abstract

Continual learning is the constant development of complex behaviors with no final end in mind. It is the process of learning ever more complicated skills by building on those skills already developed. In order for learning at one stage of development to serve as the foundation for later learning, a continual-learning agent should learn hierarchically. CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development, is proposed, described, tested, and evaluated in this dissertation. CHILD accumulates useful behaviors in reinforcement environments by using the Temporal Transition Hierarchies learning algorithm, also derived in the dissertation. This constructive algorithm generates a hierarchical, higher-order neural network that can be used for predicting context-dependent temporal sequences and can learn sequential-task benchmarks more than two orders of magnitude faster than competing neural-network systems. Consequently, CHILD can quickly solve complicated non-Markovian reinforcement-learning tasks and can then transfer its skills to similar but even more complicated tasks, learning these faster still. This continual-learning approach is made possible by the unique properties of Temporal Transition Hierarchies, which allow existing skills to be amended and augmented in precisely the same way that they were constructed in the first place.
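
To make the mechanism concrete, the following is a minimal Python sketch of the core idea behind Temporal Transition Hierarchies as summarized above: each first-order connection weight can be modulated by a higher-order unit that reads the previous time step's input, and such units are added constructively to make a transition context-dependent. This is an illustrative sketch, not the dissertation's implementation; the class and method names (TransitionHierarchy, add_unit) and the simple delta-rule update are assumptions made for illustration.

    import numpy as np

    class TransitionHierarchy:
        """Illustrative sketch (assumed simplification): higher-order units
        modulate connection weights based on the previous input."""

        def __init__(self, n_in, n_out, lr=0.1):
            self.w = np.zeros((n_out, n_in))  # first-order weights
            self.higher = {}                  # (i, j) -> weights over previous input
            self.lr = lr
            self.prev_x = np.zeros(n_in)

        def _effective_weights(self):
            # Effective weight of connection (i, j) = first-order weight
            # plus the higher-order unit's response to the previous input.
            w_eff = self.w.copy()
            for (i, j), hw in self.higher.items():
                w_eff[i, j] += hw @ self.prev_x
            return w_eff

        def step(self, x, target=None):
            x = np.asarray(x, dtype=float)
            y = self._effective_weights() @ x
            if target is not None:
                err = np.asarray(target, dtype=float) - y
                # Delta-rule update for first-order weights ...
                self.w += self.lr * np.outer(err, x)
                # ... and for each higher-order unit, with credit assigned
                # through the connection it modulates.
                for (i, j), hw in self.higher.items():
                    hw += self.lr * err[i] * x[j] * self.prev_x
            self.prev_x = x.copy()
            return y

        def add_unit(self, i, j):
            # Constructive step: make connection (i, j) context-sensitive
            # by attaching a new higher-order unit to it.
            self.higher.setdefault((i, j), np.zeros_like(self.prev_x))

For example, after net = TransitionHierarchy(n_in=3, n_out=2) and net.add_unit(0, 1), repeated calls to net.step(x, target) train output 0's response to input 1 to depend on what was observed one step earlier. Because new units are ordinary connections one level up, they can themselves be modulated later, which is how existing skills can be amended in the same way they were built.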

Available from Oldenbourg Verlag (publisher): ISBN 3-486-23603-2.

The following are all compressed PostScript files.

Contents:

Leading pages (pp. iv - xiv)


Chapters:

1. Introduction (pp. 1 - 7).
2. Robotics Environments and Learning Tasks (pp. 8 - 16).
3. Neural-Network Learning (pp. 17 - 24).
4. Solving Temporal Problems with Neural Networks (pp. 25 - 33).
5. Reinforcement Learning (pp. 34 - 44).
6. The Automatic Construction of Sensorimotor Hierarchies (pp. 45 - 71).
6.1 Behavior Hierarchies (pp. 45 - 52).
6.2 Temporal Transition Hierarchies (pp. 52 - 69).
6.3 Conclusions (pp. 70 - 71).
7. Simulations (pp. 72 - 95).
7.1 Description of Simulation System (pp. 72 - 73).
7.2 Supervised-Learning Tasks (pp. 73 - 82).
7.3 Continual Learning Results (pp. 82 - 95).
8. Synopsis, Discussion, and Conclusions (pp. 96 - 107).

 
Appendices A-E (pp. 108 - 117).
Bibliography (pp. 118 - 127).
The dissertation is also available as a single PDF (138 pages, 624 kbytes).
