Text to Animation Reading Group

The goal of the Text to Animation project is to connect natural language processing and graphics. The project's current focus is generating human motion animations from natural language descriptions. Accordingly, we read papers in the following topic areas: sequence-to-sequence modeling, motion modeling, text-to-pixels, and motion controller learning for animated characters.

Spring 2018 Meeting Time & Place

  • Biweekly meetings on Mondays, 1 p.m. - 2 p.m. in GDC 3.816

Scheduled Meetings

Meetings will resume in Fall 2018.

* Future meetings are subject to rescheduling.

* Please send in suggestions for new papers to discuss, or vote for one of the currently proposed papers.

Past Meetings

• 05/07/18, 1 p.m., GDC 3.816
  Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne
  DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills, ACM Transactions on Graphics, 2018
  Project website (includes videos)
• 04/23/18, 1 p.m., GDC 3.816
  Daniel Holden, Taku Komura, and Jun Saito
  Phase-Functioned Neural Networks for Character Control, ACM Transactions on Graphics (TOG), 2017
  Project website (includes videos)
• 04/09/18, 1 p.m., GDC 3.816
  Yitong Li, Martin Renqiang Min, Dinghan Shen, David Carlson, and Lawrence Carin
  Video Generation from Text, Proc. American Association for Artificial Intelligence (AAAI), 2018
• 03/26/18, 1 p.m., GDC 3.816
  Ziyu Wang*, Josh Merel*, Scott Reed, Greg Wayne, Nando de Freitas, and Nicolas Heess (* joint first authors)
  Robust Imitation of Diverse Behaviors, NIPS, 2017
  Project website (supplemental material includes videos)
• 03/05/18, 1 p.m., GDC 3.816
  Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges
  Learning Human Motion Models for Long-term Predictions, International Conference on 3D Vision, 2017 (received Best Paper Award)
  Project website (includes videos)
• 02/19/18, 1 p.m., GDC 3.816
  Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, and Hao Li
  Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis, arXiv, 2018
• 02/05/18, 1 p.m., GDC 3.816
  Junyoung Chung, Sungjin Ahn, and Yoshua Bengio
  Hierarchical Multiscale Recurrent Neural Networks, ICLR, 2017
• 01/22/18, 1 p.m., GDC 3.816
  Xue Bin Peng, Glen Berseth, Kangkang Yin, and Michiel van de Panne
  DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning, ACM Transactions on Graphics, 2017
  Project website (includes videos)
• 01/10/18, 11 a.m., GDC 5.816
  Julieta Martinez, Michael J. Black, and Javier Romero
  On human motion prediction using recurrent neural networks, CVPR, 2017
• 12/11/17, 12 p.m., GDC 3.816
  Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, and Taku Komura
  A Recurrent Variational Autoencoder for Human Motion Synthesis, British Machine Vision Conference, 2017
• 12/04/17, 2 p.m., GDC 5.816
  Daniel Holden, Jun Saito, and Taku Komura
  A Deep Learning Framework for Character Motion Synthesis and Editing, SIGGRAPH, 2016
• 11/13/17, 2 p.m., GDC 5.816
  Yang Li, Nan Du, and Samy Bengio
  Time-Dependent Representation for Neural Event Sequence Prediction, arXiv
• 10/30/17, 2 p.m., GDC 5.816
  Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess
  Learning Human Behaviors from Motion Capture by Adversarial Imitation, arXiv
• 10/16/17, 2 p.m., GDC 5.816
  Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas
  StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, ICCV, 2017
• 10/02/17, 2 p.m., GDC 5.816
  Prajit Ramachandran, Peter J. Liu, and Quoc V. Le
  Unsupervised Pretraining for Sequence to Sequence Learning, ACL, 2017
• 09/18/17, 2 p.m., GDC 5.816
  Cihan Halit and Tolga Capin
  Multiscale motion saliency for keyframe extraction from motion capture sequences, Computer Animation and Virtual Worlds Journal, January 2011
• 08/29/17, 2 p.m., GDC 5.816
  Matthias Plappert, Christian Mandery, and Tamim Asfour
  Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks, arXiv, 2017

Proposed Papers

Sequence-to-Sequence

• Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh
  Text2Action: Generative Adversarial Synthesis from Language to Action, arXiv, 2017
• Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, and Thomas S. Huang
  Dilated Recurrent Neural Networks, NIPS, 2017
• Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee
  Learning to Generate Long-term Future via Hierarchical Prediction, ICML, 2017
• Eli Shlizerman, Lucio Dery, Hayden Schoen, and Ira Kemelmacher-Shlizerman
  Audio to Body Dynamics, to appear in CVPR, 2018 (Spotlight)
  Project website

Motion Modeling

• Emad Barsoum, John Kender, and Zicheng Liu
  HP-GAN: Probabilistic 3D human motion prediction via GAN, arXiv, 2017
• Judith Bütepage, Michael J. Black, Danica Kragic, and Hedvig Kjellström
  Deep representation learning for human motion prediction and classification, CVPR, 2017

Text-to-Pixels

• Yingwei Pan, Zhaofan Qiu, Ting Yao, Houqiang Li, and Tao Mei
  To Create What You Tell: Generating Videos from Captions, Proc. ACM Multimedia Conference, 2017
• Kevin Chen, Christopher B. Choy, Manolis Savva, Angel Chang, Thomas Funkhouser, and Silvio Savarese
  Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings, arXiv, 2018
  Project website (includes videos)
• Tiago Ramalho, Tomáš Kociský, Frederic Besse, S. M. Ali Eslami, Gábor Melis, Fabio Viola, Phil Blunsom, and Karl Moritz Hermann
  Encoding Spatial Relations from Natural Language, arXiv, 2018

Motion Controller

• Taesoo Kwon and Jessica K. Hodgins
  Momentum-Mapped Inverted Pendulum Models for Controlling Dynamic Human Motions, ACM Transactions on Graphics, 2017

If you find a paper interesting and would like to recommend it for discussion, please let us know during a meeting or e-mail Angela Lin.

Subscribe to Email List