From Nodes to Networks: Evolving Recurrent Neural Networks (2020)
Gated recurrent networks, such as those composed of Long Short-Term Memory (LSTM) nodes, have recently been used to advance the state of the art in many sequential processing tasks such as speech recognition and machine translation. However, the basic structure of the LSTM node is essentially the same as when it was first conceived 25 years ago. Recently, evolutionary and reinforcement learning mechanisms have been employed to create new variations of this structure. This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. Remarkably, this node did not perform well in another task, music modeling, but it was possible to evolve a different node that did, demonstrating that the approach discovers structure customized for each task. The paper also shows how the search can be sped up by training an LSTM network to estimate the performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve the performance of deep learning architectures beyond human ability to do so.
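To make the tree-based encoding concrete, the Python sketch below shows a minimal genetic-programming-style representation of a gated memory node as an expression tree over the node's input and recurrent state. The function set, the helpers random_tree, evaluate, and mutate, and all probabilities are illustrative assumptions, not the paper's actual implementation, which evolves richer structures with multiple memory cells and recurrent paths.

    import math
    import random

    # Toy function and terminal sets for encoding one memory-node update
    # as an expression tree (a hypothetical simplification).
    BINARY = {"add": lambda a, b: a + b,
              "mul": lambda a, b: a * b}
    UNARY = {"tanh": math.tanh,
             "sigmoid": lambda a: 1.0 / (1.0 + math.exp(-a))}
    TERMINALS = ["x", "h_prev", "c_prev"]  # input and recurrent state

    def random_tree(depth=3):
        """Grow a random expression tree describing a node update."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        if random.random() < 0.5:
            return (random.choice(list(UNARY)), random_tree(depth - 1))
        op = random.choice(list(BINARY))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, env):
        """Evaluate a tree on concrete input values."""
        if isinstance(tree, str):
            return env[tree]
        if len(tree) == 2:
            return UNARY[tree[0]](evaluate(tree[1], env))
        return BINARY[tree[0]](evaluate(tree[1], env), evaluate(tree[2], env))

    def mutate(tree, depth=2):
        """Mutation: replace random subtrees with freshly grown ones."""
        if isinstance(tree, str) or random.random() < 0.2:
            return random_tree(depth)
        children = [mutate(c, depth) if random.random() < 0.3 else c
                    for c in tree[1:]]
        return (tree[0],) + tuple(children)

    # Example: generate, evaluate, and mutate one candidate node update.
    candidate = random_tree()
    print(candidate)
    print(evaluate(candidate, {"x": 0.5, "h_prev": -0.1, "c_prev": 0.2}))
    print(mutate(candidate))

In a full system, each such tree would be compiled into a trainable recurrent node and scored on the benchmark task; the surrogate LSTM mentioned in the abstract would then predict the fitness of candidate trees from their structure, so that only promising candidates incur the cost of full training.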
Citation:
In Deep Neural Evolution: Deep Learning with Evolutionary Computation, H. Iba and N. Noman (Eds.), pp. 233-251, 2020. Springer. (Also arXiv:1803.04439.)
Risto Miikkulainen (Faculty), risto [at] cs utexas edu
Aditya Rawal (Ph.D. Alumni), aditya [at] cs utexas edu