Text to Animation Reading Group

The goal of the Text to Animation project is to connect natural language processing and graphics. The current focus of the project is on generating human motion animations from natural language descriptions. As a result, we read papers in the following topic areas: sequence-to-sequence modeling, motion modeling, text-to-pixels, and motion controller learning for animated characters.

Spring 2018 Meeting Time & Place

  • Biweekly meetings on Mondays, 1 p.m. - 2 p.m. in GDC 3.816

Scheduled Meetings

02/19/18 1PM GDC 3.816 Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, and Hao Li
Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis, arXiv, 2018

* Future meetings are subject to rescheduling.

* Please send suggestions for new papers to discuss, or vote for one of the currently proposed papers.

Past Meetings

02/05/18 1PM GDC 3.816 Junyoung Chung, Sungjin Ahn, and Yoshua Bengio
Hierarchical Multiscale Recurrent Neural Networks, ICLR, 2017
01/22/18 1PM GDC 3.816 Xue Bin Peng, Glen Berseth, Kangkang Yin, and Michiel van de Panne
DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning, ACM Transactions on Graphics, 2017
Project website (Includes videos)
01/10/18 11AM GDC 5.816 Julieta Martinez, Michael J. Black, and Javier Romero
On human motion prediction using recurrent neural networks, CVPR, 2017
12/11/17 12PM GDC 3.816 Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, and Taku Komura
A Recurrent Variational Autoencoder for Human Motion Synthesis, British Machine Vision Conference, 2017
12/04/17 2PM GDC 5.816 Daniel Holden, Jun Saito, and Taku Komura
A Deep Learning Framework for Character Motion Synthesis and Editing, SIGGRAPH, 2016
11/13/17 2PM GDC 5.816 Yang Li, Nan Du, and Sammy Bengio
Time-Dependent Representation for Neural Event Sequence Prediction, arXiv
10/30/17 2PM GDC 5.816 Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess
Learning Human Behaviors from Motion Capture by Adversarial Imitation, arXiv
10/16/17 2PM GDC 5.816 Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas
StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, ICCV, 2017
10/02/17 2PM GDC 5.816 Prajit Ramachandran, Peter J. Liu, and Quoc V. Le
Unsupervised Pretraining for Sequence to Sequence Learning, ACL, 2017

Proposed Papers

Sequence-to-Sequence

Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh
Text2Action: Generative Adversarial Synthesis from Language to Action, arXiv, 2017
Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, and Thomas S. Huang
Dilated Recurrent Neural Networks, NIPS, 2017
Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee
Learning to Generate Long-term Future via Hierarchical Prediction, ICML, 2017

Motion Modeling

Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges
Learning Human Motion Models for Long-term Predictions, International Conference on 3D Vision, 2017 (Received Best Paper Award)
Project website (Includes videos)
Emad Barsoum, John Kender, and Zicheng Liu
HP-GAN: Probabilistic 3D human motion prediction via GAN, arXiv, 2017
Judith Bütepage, Michael Black, Danica Kragic, and Hedvig Kjellström
Deep representation learning for human motion prediction and classification, CVPR, 2017

Motion Controller

Taesoo Kwon and Jessica K. Hodgins
Momentum-Mapped Inverted Pendulum Models for Controlling Dynamic Human Motions, ACM Transactions on Graphics, 2017

If you find a paper interesting and would like to recommend it for discussion, please let us know during a meeting or e-mail Angela Lin.
