

Email: swarat@cs.utexas.edu

Administrative support:
Megan Booth, meganb@cs.utexas.edu

Mailing address:
GDC 5.810
Computer Science Department,
The University of Texas at Austin
Austin, TX 78712.


The Trishul lab for Trustworthy Intelligent Systems is still growing! If you would like to do a Ph.D. in the lab, please apply to UTCS and mention my name in the application, or write to me if you have questions. If you would like to do a postdoc, please write to me with a summary of your background and objectives.


My lab, called Trishul, studies problems at the interface of programming languages, logic and formal methods, and machine learning. Through a combination of programming language abstractions, statistical learning, search, and automated mathematical reasoning, we hope to build a new class of intelligent systems that are reliable, secure, and transparent by construction and can perform complex tasks that are beyond the scope of contemporary AI. I am a member of UT Austin's Programming Languages and Formal Methods group, a core faculty member in UT's Machine Learning Laboratory, and an affiliate of Texas Robotics.

Here are the main research themes in Trishul.


Program synthesis is the problem of automatically writing programs that accomplish a given set of high-level goals. Such goals can take many forms, including noisy input-output data, behavioral constraints, reward functions, and visual or textual descriptions of the desired behavior. Program synthesis cuts across most of our research. Some of our work uses program synthesis to automate everyday software engineering tasks. Other work seeks to automatically discover safe and interpretable policies for AI agents. Recently, we have also used program synthesis to automatically discover interpretable scientific hypotheses from data and prior knowledge.

One key challenge in program synthesis is that the synthesizer must search through a combinatorially large space of programs. Another is that the user-defined goals are often incomplete, and the synthesizer must extrapolate them into the true goals that should direct the search. We approach these challenges through a mix of symbolic and statistical (in particular, deep learning) techniques.
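As a toy illustration of this search problem (a hypothetical sketch, not code from the lab): the snippet below enumerates a tiny grammar of arithmetic expressions over a variable x, in order of increasing depth, and returns the first expression consistent with a set of input-output examples. Even this miniature grammar shows how quickly the space grows with depth.

```python
# Minimal enumerative program synthesis over a toy expression grammar.
# (Illustrative example only; all names here are hypothetical.)
import itertools

def exprs(depth):
    """Yield (name, semantics) pairs for expressions over a variable x."""
    yield ("x", lambda x: x)
    for c in (1, 2, 3):
        yield (str(c), lambda x, c=c: c)
    if depth > 0:
        subs = list(exprs(depth - 1))
        for (ln, lf), (rn, rf) in itertools.product(subs, repeat=2):
            yield (f"({ln} + {rn})", lambda x, lf=lf, rf=rf: lf(x) + rf(x))
            yield (f"({ln} * {rn})", lambda x, lf=lf, rf=rf: lf(x) * rf(x))

def synthesize(examples, max_depth=2):
    """Return the first enumerated expression consistent with all examples."""
    for depth in range(max_depth + 1):
        for name, f in exprs(depth):
            if all(f(x) == y for x, y in examples):
                return name
    return None

# Finds an expression equivalent to 2*x + 1 from three examples.
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```

Real synthesizers replace this blind enumeration with pruning, deduction, and learned guidance, but the underlying search problem is the same.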


Automated mathematical reasoning, i.e., the design of algorithms for proving and disproving mathematical statements, is another central plank of our research. In some cases, such reasoning is an end in itself. In other cases, automated reasoning is a tool for establishing that a system is safe or inferring useful facts that can aid learning or program synthesis. In either case, the complexity of exploring spaces of proofs and counterexamples is a formidable barrier. As with program synthesis, we approach this challenge by complementing classical automated reasoning techniques with statistical methods (deep learning).
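The flavor of such reasoning can be conveyed by a miniature "prover" (a hypothetical toy, not one of our tools): it decides the validity of a propositional formula by exhaustively searching truth assignments, where any falsifying assignment is a counterexample. The exponential blow-up of this search is exactly the complexity barrier mentioned above.

```python
# Toy decision procedure for propositional validity via exhaustive search.
# (Hypothetical illustration; not a tool from the lab.)
from itertools import product

def is_valid(formula, variables):
    """Return True iff `formula` holds under every truth assignment.
    A falsifying assignment, if one exists, is a counterexample."""
    for values in product([False, True], repeat=len(variables)):
        if not formula(dict(zip(variables, values))):
            return False
    return True

# Modus ponens stated as an implication: ((p -> q) and p) -> q.
mp = lambda a: not ((not a["p"] or a["q"]) and a["p"]) or a["q"]
print(is_valid(mp, ["p", "q"]))  # True
```

Practical provers avoid the exhaustive search through clause learning, heuristics, and, in our work, learned guidance over the space of proofs.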


A running theme in our recent work is the use of neurosymbolic programs, obtained through the composition of neural networks and traditional code, as a model of learning-enabled systems. The neural modules in such a program facilitate efficient learning, while the symbolic components allow the program to use human domain knowledge and also be human-comprehensible. Our research studies a range of problems involving neurosymbolic programs, including the design of language abstractions that allow neural and symbolic modules to interoperate smoothly, methods for analyzing the safety and performance of neurosymbolic programs, and algorithms for learning the structure and parameters of neurosymbolic programs from data.
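A hypothetical toy sketch of this idea: the policy below composes a learned linear scorer (standing in for a neural module) with symbolic code that clamps the action into a safe range, so the safety property holds by construction regardless of the learned parameters.

```python
# Illustrative neurosymbolic policy: a learned module composed with symbolic
# code that enforces a safety bound by construction.
# (Hypothetical toy example; not from the Trishul codebase.)

def linear_module(weights):
    """Stand-in for a neural module: maps features to a raw action."""
    def f(features):
        return sum(w * x for w, x in zip(weights, features))
    return f

def safe_policy(neural, max_speed):
    """Symbolic wrapper: whatever the learned module outputs, the final
    action provably lies in [-max_speed, max_speed]."""
    def policy(features):
        raw = neural(features)
        return max(-max_speed, min(max_speed, raw))  # symbolic safety shield
    return policy

controller = safe_policy(linear_module([0.8, -1.5]), max_speed=1.0)
print(controller([2.0, 0.1]))  # raw output ~1.45 is clamped to 1.0
```

The point of the composition is that the clamp is ordinary code, so it can be verified once and for all, while the learned component remains free to be trained.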


Probabilistic programming, in which programs are used to represent complex, structured probability distributions, is also an area of interest. We are especially interested in using such programs to unite logical and probabilistic reasoning, to perform complex generative modeling, and to support causal inference and discovery. Our research studies a variety of technical problems in probabilistic programming, including the design of probabilistic programming languages; the inference of probabilities, independence relationships, and the effects of interventions and counterfactuals; and algorithms for learning the structure and parameters of probabilistic programs.
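A toy probabilistic program (a hypothetical example, not from our systems) makes the idea concrete: the sampler below defines a joint distribution over rain, a sprinkler, and a wet lawn, and rejection sampling performs inference by conditioning on the observation that the lawn is wet.

```python
# A toy probabilistic program: forward simulation defines the distribution,
# and rejection sampling conditions on an observation.
# (Hypothetical illustration; probabilities chosen arbitrarily.)
import random

def model(rng):
    """One forward run of the program: sample rain, then the sprinkler,
    then deterministically compute whether the lawn is wet."""
    rain = rng.random() < 0.2
    sprinkler = rng.random() < (0.01 if rain else 0.4)
    wet = rain or sprinkler
    return rain, wet

def infer_p_rain_given_wet(n=100_000, seed=0):
    """Estimate P(rain | wet) by keeping only runs where the lawn is wet."""
    rng = random.Random(seed)
    accepted = [rain for rain, wet in (model(rng) for _ in range(n)) if wet]
    return sum(accepted) / len(accepted)

print(infer_p_rain_given_wet())  # roughly P(rain | wet), about 0.2 / 0.52
```

Real probabilistic programming systems replace this brute-force conditioning with far more efficient inference algorithms, which is one of the technical problems listed above.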

Current members:

  • Greg Anderson (Ph.D. student; co-advised with Isil Dillig)
  • Sam Anklesaria (Ph.D. student)
  • Josh Hoffman (Ph.D. student; co-advised with Joydeep Biswas)
  • Atharva Sehgal (Ph.D. student)
  • Meghana Sistla (Ph.D. student)
  • Chenxi Yang (Ph.D. student)
  • Yeming Wen (Ph.D. student)
  • Dweep Trivedi (Ph.D. student)
  • Eric Hsiung (Ph.D. student; co-advised with Joydeep Biswas)
  • Amitayush Thakur (Ph.D. student)
  • Thomas Logan (Ph.D. student)
  • Christopher Hahn (Masters student)

Alumni:

  • Anders Miltner (Postdoc; 2020-2022) → Assistant Professor, Simon Fraser University
  • Dipak Chaudhari (Postdoc; 2017-2022) → Meta
  • Calvin Smith (Postdoc; 2020-2022)
  • Abhinav Verma (Ph.D.; 2016-2021) → Hartz Family Assistant Professor, Pennsylvania State University
  • Yanxin Lu (Ph.D.; 2012-2018) → Facebook
  • Yue Wang (Ph.D.; 2013-2018) → Facebook
  • Vijayaraghavan Murali (Postdoc, Research Scientist; 2015-2018) → Facebook
  • Neil Dantam (Postdoc; 2015-2017) → Assistant Professor, Colorado School of Mines
  • Srinivas Nedunuri (Postdoc; 2012-2014) → Sandia National Labs
  • Eddy Westbrook (Postdoc; 2011-2013) → Galois
  • Roberto Lublinerman (Ph.D.; 2008-2012) → Google


Teaching:

(Fall 2021, Fall 2022) CS 378: Safe and Ethical Artificial Intelligence

(Fall 2020) CS 378: Logic in Computer Science and Artificial Intelligence

(Spring 2020, Spring 2021, Spring 2022) CS 395T: Program Synthesis

(Spring 2019, Spring 2018) COMP 403/503: Reasoning about software

(Fall 2019, Fall 2018, Fall 2016, Fall 2015, Fall 2014, Spring 2014) COMP 382: Reasoning about algorithms

(Spring 2015, Fall 2013, Fall 2012) COMP 507: Computer-Aided Program Design