Using Sentence-Level LSTM Language Models for Script Inference (2016)
There is a small but growing body of research on statistical scripts: models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent recurrent neural network (RNN) models that operate directly on raw tokens to predict sentences, finding the latter to be roughly comparable to the former at predicting missing events in documents.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-16), pp. 279--289, Berlin, Germany, 2016.

Slides (PDF) | Slides (PPT)
Raymond J. Mooney Faculty mooney [at] cs utexas edu
Karl Pichotta Ph.D. Student pichotta [at] cs utexas edu