Deterministic Implementations for Reproducibility in Deep Reinforcement Learning (2018)
Prabhat Nagarajan, Garrett Warnell, and Peter Stone
While deep reinforcement learning (DRL) has led to numerous successes in recent years, reproducing these successes can be extremely challenging. One reproducibility challenge particularly relevant to DRL is nondeterminism in the training process, which can substantially affect the results. Motivated by this challenge, we study the positive impacts of deterministic implementations in eliminating nondeterminism in training. To do so, we consider the particular case of the deep Q-learning algorithm, for which we produce a deterministic implementation by identifying and controlling all sources of nondeterminism in the training process. One by one, we then allow individual sources of nondeterminism to affect our otherwise deterministic implementation, and measure the impact of each source on the variance in performance. We find that individual sources of nondeterminism can substantially impact the performance of an agent, illustrating the benefits of deterministic implementations. In addition, we discuss the important role of deterministic implementations in achieving exact replicability of results.
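As a rough illustration of the kind of controls such a deterministic implementation involves, the sketch below seeds the common pseudorandom number generators and disables nondeterministic GPU kernels in a deep Q-learning setup. This is a minimal sketch, assuming PyTorch and OpenAI Gym and a hypothetical seed constant; the paper does not specify these libraries, and this is not the authors' implementation.

import random

import numpy as np
import torch
import gym

SEED = 42  # hypothetical fixed seed; any constant works

# Seed every pseudorandom number generator the training loop touches:
# Python's RNG (e.g., epsilon-greedy exploration), NumPy (e.g., replay
# minibatch sampling), and PyTorch (network weight initialization).
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)

# Force cuDNN to choose deterministic kernels and disable the autotuner,
# which can otherwise select different algorithms across runs.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Seed the environment so episode dynamics are reproducible as well.
# (Newer Gym/Gymnasium versions seed via env.reset(seed=SEED) instead.)
env = gym.make("CartPole-v1")
env.seed(SEED)
env.action_space.seed(SEED)

Note that seeding alone is not sufficient: some GPU kernels are nondeterministic regardless of the seed, which is why the cuDNN flags above are needed in addition to the seed calls.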
Citation:
In 2nd Reproducibility in Machine Learning Workshop at ICML 2018, Stockholm, Sweden, July 2018.
Bibtex:
@inproceedings{nagarajan2018deterministic,
  author    = {Prabhat Nagarajan and Garrett Warnell and Peter Stone},
  title     = {Deterministic Implementations for Reproducibility in Deep Reinforcement Learning},
  booktitle = {2nd Reproducibility in Machine Learning Workshop at ICML},
  address   = {Stockholm, Sweden},
  month     = {July},
  year      = {2018}
}