Splitting Steepest Descent for Growing Neural Architectures (2019)
Qiang Liu, Lemeng Wu and Dilin Wang
We develop a progressive training approach for neural networks that adaptively grows the network structure by splitting existing neurons into multiple off-springs. By leveraging a functional steepest descent idea, we derive a simple criterion for deciding the best subset of neurons to split and a splitting gradient for optimally updating the off-springs. Theoretically, our splitting strategy is a second-order functional steepest descent for escaping saddle points in an ∞-Wasserstein metric space, on which the standard parametric gradient descent is a first-order steepest descent. Our method provides a new computationally efficient approach for optimizing neural network structures, especially for learning lightweight neural architectures in resource-constrained settings.
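The splitting rule described in the abstract can be illustrated with a small numerical sketch. The code below is a hypothetical illustration, not the authors' released implementation: it approximates each hidden neuron's splitting matrix by a finite-difference Hessian of the loss with respect to that neuron's incoming weights (a stand-in for the exact splitting matrix defined in the paper), splits the neuron whose minimum eigenvalue is most negative, and displaces the two off-springs along the corresponding eigenvector while sharing the outgoing weight. All variable and function names are assumptions made for this example.

```python
# Minimal sketch (not the authors' code) of splitting the most "splittable" neuron.
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network: y_hat = v^T tanh(W x)
X = rng.normal(size=(256, 3))                     # inputs
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))        # targets
W = rng.normal(scale=0.5, size=(4, 3))            # hidden weights (4 neurons)
v = rng.normal(scale=0.5, size=4)                 # output weights

def loss(W, v):
    return 0.5 * np.mean((np.tanh(X @ W.T) @ v - y) ** 2)

def neuron_hessian(W, v, i, eps=1e-4):
    """Finite-difference Hessian of the loss w.r.t. neuron i's weight vector,
    used here as a simple stand-in for the splitting matrix."""
    d = W.shape[1]
    H = np.zeros((d, d))
    for a in range(d):
        for b in range(d):
            Wpp, Wpm, Wmp, Wmm = W.copy(), W.copy(), W.copy(), W.copy()
            Wpp[i, a] += eps; Wpp[i, b] += eps
            Wpm[i, a] += eps; Wpm[i, b] -= eps
            Wmp[i, a] -= eps; Wmp[i, b] += eps
            Wmm[i, a] -= eps; Wmm[i, b] -= eps
            H[a, b] = (loss(Wpp, v) - loss(Wpm, v)
                       - loss(Wmp, v) + loss(Wmm, v)) / (4 * eps ** 2)
    return H

# Splitting criterion: pick the neuron with the most negative minimum eigenvalue.
scores, directions = [], []
for i in range(W.shape[0]):
    lam, vecs = np.linalg.eigh(neuron_hessian(W, v, i))
    scores.append(lam[0])                         # smallest eigenvalue
    directions.append(vecs[:, 0])                 # its eigenvector
best = int(np.argmin(scores))

if scores[best] < 0:                              # splitting can decrease the loss
    delta = 0.1 * directions[best]
    W = np.vstack([W, W[best] + delta])           # off-spring 1 (appended)
    W[best] = W[best] - delta                     # off-spring 2 (replaces parent)
    v = np.append(v, v[best] / 2)                 # share the outgoing weight
    v[best] /= 2
    print(f"split neuron {best}, min eigenvalue {scores[best]:.4f}")
```

In this toy setting, a negative minimum eigenvalue indicates that splitting the neuron along the associated eigenvector strictly decreases the loss, which is the signal the paper's criterion uses to decide which neurons to grow.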
Citation:
Advances in Neural Information Processing Systems (NeurIPS 2019), pp. 10655–10665.