I am a PhD candidate in the Department of Computer Science at the University of Texas at Austin, where I am fortunate to be advised by Prof. Scott Niekum as part of the Personal Autonomous Robotics Lab (PeARL). My interests lie at the intersection of Learning from Demonstration and Human-Robot Interaction. Specifically, my research models the intentions of human demonstrators using additional signals, such as eye gaze and audio, to improve robot learning from demonstration.

Before coming to UT Austin, I worked briefly as a Research Associate at the Robotics Institute, Carnegie Mellon University (CMU). Prior to that, I completed my MS in Robotics at CMU under the guidance of Prof. Aisling Kelliher, Prof. Thanassis Rikakis, and Prof. Kris Kitani. During my time at CMU, I worked with a team of computer vision, medical, and design experts to develop a prototype for a home-based stroke rehabilitation system. Before pursuing graduate studies, I completed my undergraduate degree in Computer Science and Engineering at the Indian Institute of Technology, Jodhpur, India.

Recent News

  • March 2021: It was HRI Week! I was invited to give a PhD spotlight talk on my work at the Workshop on Solutions for Socially Intelligent HRI in Real-World Scenarios, and I presented our work on using human audio for learning from demonstration at the Workshop on Sound in HRI. We also successfully held the HRI Pioneers workshop, which I helped co-organize as Program Chair.
  • March 2021: I was invited to share my journey and experiences as a PhD student on a panel at the UT undergraduate Women in Computer Science (WiCS) Research Hackathon.
  • January 2021: I've been accepted to Google's CS Research Mentorship Program (CSRMP) Class of 2021.
  • December 2020: Our paper, Efficiently Guiding Imitation Learning Agents with Human Gaze, has been accepted to AAMAS 2021.
  • October 2020: I was recognized as one of the top 10% of reviewers for NeurIPS 2020.