My research area is natural language processing. I build machine learning models and datasets to help computers understand human language. Recently, I have focused on extracting and querying information from unstructured text and knowledge bases. Before UT Austin, I spent a year at Google AI in NYC as a visiting faculty researcher. Prior to that, I was a Ph.D. student at UW, advised by Luke Zettlemoyer and Yejin Choi.
Here are a few broad areas that I am interested in at the moment. For more, please look at my publications page.
Question answering: How can we build efficient and robust methods for question answering? How can we exploit diverse sources of information? Can we provide faithful and/or useful rationales for a model's predictions? How should we present answers?
Language understanding in context: How can we understand text in rich conversational, social, temporal, geographical, and multimodal contexts?
Multilinguality: Can we bring advances in English NLP to other languages? How can we efficiently build datasets and models to enable such transfers?
News
(08/20/20) I'm excited to start my job as an assistant professor at UT Austin! If you are interested in working with me, please look at [this page].
(06/18/20) We are organizing a NeurIPS competition on efficient question answering [EfficientQA]. Please consider submitting your system!
Personal
Korean version: [] Easier version: []
My name (은솔) means soft, persistent love in ancient Korean (or at least my father claims so).
I started my NLP journey as an undergraduate at Cornell, working with Lillian Lee.