Research

My research focuses on improving grounded language learning for robotics: enabling robots to connect natural language with knowledge, whether stored as facts or obtained through sensory perception, in order to interact meaningfully with humans in natural language. This work falls under the umbrella of the Building-Wide Intelligence project at UT Austin.

Publications
Opportunistic Active Learning for Grounding Natural Language Descriptions

Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney.
In Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17), pp. 67--76, Mountain View, California, November 2017.

[Abstract] [PDF] [Bibtex]

Active learning identifies data points from a pool of unlabeled examples whose labels, if made available, are most likely to improve the predictions of a supervised model. Most research on active learning assumes that an agent has access to the entire pool of unlabeled data and can ask for labels of any data points during an initial training phase. However, when incorporated in a larger task, an agent may only be able to query some subset of the unlabeled pool. An agent can also opportunistically query for labels that may be useful in the future, even if they are not immediately relevant. In this paper, we demonstrate that this type of opportunistic active learning can improve performance in grounding natural language descriptions of everyday objects---an important skill for home and office robots. We find, with a real robot in an object identification setting, that inquisitive behavior---asking users important questions about the meanings of words that may be off-topic for the current dialog---leads to identifying the correct object more often over time.
@inproceedings{thomason:corl17,
  title={Opportunistic Active Learning for Grounding Natural Language Descriptions},
  author={Jesse Thomason and Aishwarya Padmakumar and Jivko Sinapov and Justin Hart and Peter Stone and Raymond J. Mooney},
  booktitle={Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17)},
  month={November},
  editor={Sergey Levine and Vincent Vanhoucke and Ken Goldberg},
  address={Mountain View, California},
  publisher={PMLR},
  pages={67--76},
  pdf={http://proceedings.mlr.press/v78/thomason17a/thomason17a.pdf},
  url={http://proceedings.mlr.press/v78/thomason17a.html},
  year={2017}
}
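
The opportunistic querying behavior described in the abstract can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's actual model: the count-based confidence heuristic, the threshold, and the class names are made up to show the core idea, which is that an opportunistic learner considers queries about *all* known words, not just those relevant to the current dialog.

```python
class WordClassifier:
    """Toy per-word perceptual classifier whose confidence grows with labels."""

    def __init__(self):
        self.pos = 0
        self.neg = 0

    def update(self, label):
        if label:
            self.pos += 1
        else:
            self.neg += 1

    def confidence(self):
        n = self.pos + self.neg
        if n == 0:
            return 0.5          # no labels yet: maximally uncertain
        return max(self.pos, self.neg) / n


def choose_query(classifiers, available_objects, current_words,
                 opportunistic=True, threshold=0.75):
    """Return the (word, object) label query for the least-confident
    classifier, or None if every candidate is already confident enough.
    An opportunistic learner ranges over ALL known words, even ones that
    are off-topic for the current dialog."""
    words = classifiers if opportunistic else current_words
    candidates = [(classifiers[w].confidence(), w, obj)
                  for w in words for obj in available_objects]
    if not candidates:
        return None
    conf, word, obj = min(candidates)
    return (word, obj) if conf < threshold else None
```

With a well-trained classifier for "red" and an untrained one for "heavy", the opportunistic agent asks about "heavy" even when the current dialog only mentions "red", while the non-opportunistic agent asks nothing.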

Integrated Learning of Dialog Strategies and Semantic Parsing

Aishwarya Padmakumar, Jesse Thomason, and Raymond J. Mooney.
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pp. 547--557, Valencia, Spain, April 2017.

[Abstract] [PDF] [Bibtex] [Slides]

Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone.
@inproceedings{padmakumar:eacl17,
  title={Integrated Learning of Dialog Strategies and Semantic Parsing},
  author={Aishwarya Padmakumar and Jesse Thomason and Raymond J. Mooney},
  booktitle={Proceedings of the 15th Conference of the European Chapter of the Association
          for Computational Linguistics (EACL 2017)},
  month={April},
  address={Valencia, Spain},
  pages={547--557},
  url={http://www.cs.utexas.edu/users/ai-lab/pub-view.php?PubID=127615},
  year={2017}
}
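
The joint-learning loop from the abstract can be caricatured in a short simulation. This is only a sketch under loud assumptions: a tabular Q-learner over a coarse parser-confidence state stands in for the paper's dialog policy, and a count-based utterance-to-action lookup stands in for the semantic parser; every name and constant here is invented for illustration. The point it shows is the feedback loop, where dialogs train the policy and confirmed dialogs also yield training pairs for the parser.

```python
import random


class CountParser:
    """Stand-in 'parser': maps utterances to actions by counting confirmed pairs."""

    def __init__(self):
        self.counts = {}  # (utterance, action) -> count

    def parse(self, utterance):
        scored = {a: c for (u, a), c in self.counts.items() if u == utterance}
        if not scored:
            return None, 0.0
        best = max(scored, key=scored.get)
        return best, scored[best] / sum(scored.values())

    def update(self, utterance, action):
        key = (utterance, action)
        self.counts[key] = self.counts.get(key, 0) + 1


def run_dialog(parser, q, utterance, true_action, eps=0.1, alpha=0.5):
    """One simulated dialog: the policy picks confirm vs. execute; the
    parser is retrained from any confirmed (utterance, action) pair."""
    action, conf = parser.parse(utterance)
    state = "high" if conf > 0.5 else "low"
    if random.random() < eps:                       # epsilon-greedy exploration
        choice = random.choice(["confirm", "execute"])
    else:
        choice = max(("confirm", "execute"), key=lambda a: q[(state, a)])
    if choice == "confirm" or action is None:
        parser.update(utterance, true_action)       # user confirmation = supervision
        reward = 0.5                                # asking costs a little
    else:
        reward = 1.0 if action == true_action else -1.0
        if action == true_action:
            parser.update(utterance, action)
    q[(state, choice)] += alpha * (reward - q[(state, choice)])
    return reward
```

After a handful of simulated dialogs, the parser reliably maps the utterance to the right action, having been trained only through dialog interaction.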

Automated Linguistic Personalization of Targeted Marketing Messages Mining User-Generated Text on Social Media

Rishiraj Saha Roy, Aishwarya Padmakumar, Guna Prasad Jeganathan, and Ponnurangam Kumaraguru.
In Proceedings of the 16th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 15), Cairo, Egypt, April 2015. (Best Paper Award)

[Abstract] [PDF] [Bibtex]

Personalizing marketing messages for specific audience segments is vital for increasing user engagement with advertisements, but it becomes very resource-intensive when the marketer has to deal with multiple segments, products or campaigns. In this research, we take the first steps towards automating message personalization by algorithmically inserting adjectives and adverbs that have been found to evoke positive sentiment in specific audience segments, into basic versions of ad messages. First, we build language models representative of linguistic styles from user-generated textual content on social media for each segment. Next, we mine product-specific adjectives and adverbs from content associated with positive sentiment. Finally, we insert extracted words into the basic version using the language models to enrich the message for each target segment, after statistically checking in-context readability. Decreased cross-entropy values from the basic to the transformed messages show that we are able to approach the linguistic style of the target segments. Crowdsourced experiments verify that our personalized messages are almost indistinguishable from similar human compositions. Social network data processed for this research has been made publicly available for community use.
@inproceedings{roy:cicling15,
  title={Automated Linguistic Personalization of Targeted Marketing Messages Mining User-Generated Text
      on Social Media},
  author={Rishiraj Saha Roy and Aishwarya Padmakumar and Guna Prasad Jeganathan and Ponnurangam
      Kumaraguru},
  booktitle={Proceedings of the 16th International Conference on Intelligent Text Processing
      and Computational Linguistics (CICLing 15)},
  month={April},
  address={Cairo, Egypt},
  url={https://link.springer.com/chapter/10.1007%2F978-3-319-18117-2_16},
  year={2015}
}
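
The cross-entropy check at the heart of the abstract's pipeline can be sketched as follows. The add-one-smoothed unigram model and the tiny vocabulary are simplifying assumptions (the paper works with richer segment language models); the sketch only demonstrates the selection rule, which is to insert a candidate adjective when doing so lowers the message's cross-entropy under the target segment's language model.

```python
import math
from collections import Counter


def unigram_lm(corpus_tokens, alpha=1.0):
    """Add-one smoothed unigram language model built from segment text."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab_size = len(counts) + 1        # +1 bucket for unseen words
    def prob(word):
        return (counts.get(word, 0) + alpha) / (total + alpha * vocab_size)
    return prob


def cross_entropy(tokens, lm):
    """Average negative log-probability (bits per token) under the LM."""
    return -sum(math.log2(lm(w)) for w in tokens) / len(tokens)


def personalize(message, candidate_adjectives, target_noun, lm):
    """Insert the candidate adjective before target_noun that most lowers
    cross-entropy under the segment LM; keep the original if none helps."""
    best, best_ce = message, cross_entropy(message, lm)
    i = message.index(target_noun)
    for adj in candidate_adjectives:
        variant = message[:i] + [adj] + message[i:]
        ce = cross_entropy(variant, lm)
        if ce < best_ce:
            best, best_ce = variant, ce
    return best
```

Given a segment whose text is heavy on "awesome", inserting "awesome" lowers cross-entropy and is kept, while an off-style word like "fiscal" raises it and is rejected.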