Discovering Gated Recurrent Neural Network Architectures (2018)
Reinforcement learning agent networks with memory are a key component in solving POMDP tasks. Gated recurrent networks, such as those composed of Long Short-Term Memory (LSTM) nodes, have recently been used to improve the state of the art in many supervised sequential processing tasks such as speech recognition and machine translation. However, scaling them to deep memory tasks in the reinforcement learning domain is challenging because of sparse and deceptive reward functions. To address this challenge, first, a new secondary optimization objective is introduced that maximizes the information (Info-max) stored in the LSTM network. Results indicate that when combined with neuroevolution, Info-max can discover powerful LSTM-based memory solutions that outperform traditional RNNs. Next, for supervised learning tasks, neuroevolution techniques are employed to design new LSTM architectures. Such architectural variations include discovering new pathways between the recurrent layers as well as designing new gated recurrent nodes. This dissertation proposes evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. The dissertation also shows how the search process can be sped up by training an LSTM network to estimate the performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve the performance of deep learning architectures beyond human ability to do so.
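The tree-based encoding of gated memory nodes is described only at a high level above. As an illustration, the sketch below shows one plausible way such an encoding could work: genetic-programming-style trees whose leaves read the node's inputs (x_t, h_prev, c_prev) and whose internal nodes apply elementwise operations, with mutation replacing random subtrees. The class and function names (Node, random_tree, mutate) and the operator set are hypothetical, not taken from the dissertation.

```python
import random
import math

# Hypothetical operator set for a gated-node tree: internal nodes combine
# children elementwise, leaves read the node's inputs.
BINARY_OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}
UNARY_OPS = {
    "tanh": math.tanh,
    "sigmoid": lambda a: 1.0 / (1.0 + math.exp(-a)),
}
LEAVES = ["x", "h_prev", "c_prev"]

class Node:
    """One node of the tree encoding a candidate gated recurrent unit."""
    def __init__(self, op, children=()):
        self.op = op
        self.children = list(children)

    def evaluate(self, env):
        # env maps leaf names to scalar values (vectors in a real system).
        if self.op in LEAVES:
            return env[self.op]
        if self.op in UNARY_OPS:
            return UNARY_OPS[self.op](self.children[0].evaluate(env))
        a, b = (c.evaluate(env) for c in self.children)
        return BINARY_OPS[self.op](a, b)

def random_tree(depth):
    """Grow a random tree; depth 0 forces a leaf."""
    if depth == 0 or random.random() < 0.3:
        return Node(random.choice(LEAVES))
    if random.random() < 0.5:
        return Node(random.choice(list(UNARY_OPS)), [random_tree(depth - 1)])
    return Node(random.choice(list(BINARY_OPS)),
                [random_tree(depth - 1), random_tree(depth - 1)])

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if not tree.children or random.random() < 0.3:
        return random_tree(depth)
    i = random.randrange(len(tree.children))
    tree.children[i] = mutate(tree.children[i], depth)
    return tree

# A candidate gated node is evaluated by feeding its inputs to the tree.
candidate = random_tree(depth=3)
print(candidate.evaluate({"x": 0.5, "h_prev": 0.1, "c_prev": -0.2}))
```

In a full neuroevolution loop, each tree would be compiled into a recurrent layer, trained briefly, and assigned a fitness, with mutation and crossover of subtrees producing the next generation.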
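The Info-max objective is summarized above only as maximizing the information stored in the LSTM network. One common proxy for such an objective is the entropy of discretized memory-cell activations recorded over an episode; the sketch below uses that proxy as a secondary fitness term. The binning scheme and the function name infomax_score are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np

def infomax_score(cell_states, n_bins=16):
    """Entropy-based proxy for information stored in LSTM memory cells.

    cell_states: array of shape (timesteps, n_cells) recorded during an
    episode. Each cell's activations are discretized into bins and the
    per-cell entropies are summed, rewarding networks whose memory cells
    take on diverse, informative values rather than staying constant.
    """
    total = 0.0
    for cell in cell_states.T:  # iterate over memory cells
        hist, _ = np.histogram(cell, bins=n_bins, range=(-1.0, 1.0))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        total += -(p * np.log2(p)).sum()
    return total

# Example: a cell sweeping through many values scores far above a
# near-constant cell, so evolution is rewarded for storing information.
t = np.linspace(-1, 1, 200)
states = np.stack([t, np.full_like(t, 0.05)], axis=1)
print(infomax_score(states))  # dominated by the first (varying) cell
```

Used as a secondary objective alongside task reward, a score like this can guide neuroevolution through the sparse, deceptive reward landscape the abstract describes.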
View:
PDF, HTML
Citation:
PhD Thesis, Department of Computer Science, The University of Texas at Austin.
Aditya Rawal, Ph.D. Alumni, aditya [at] cs utexas edu