I am a final-year PhD student in the Department of Computer Science at the University of Texas at Austin, advised by Alex Dimakis. I am broadly interested in the security and privacy of machine learning models. Early in my PhD this meant working on provable verification methods for adversarial robustness and Lipschitzness of neural networks, with side projects in contrastive learning. These days, I've shifted focus towards applications involving large-scale generative models, with an emphasis on language models. Prior to my PhD, I received a bachelor's degree in math and computer science from MIT and started a marketing automation startup.

I am currently on the job market and actively seeking employment!


Publications

Inverse Problems Leveraging Pre-trained Contrastive Representations

Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2021.

Provable Lipschitz Certification for Generative Models

Matt Jordan, Alexandros G. Dimakis.

International Conference on Machine Learning (ICML) 2021.

Quarantines as a Targeted Immunization Strategy

Jessica Hoffmann, Matt Jordan, Constantine Caramanis.

Preprint, arXiv:2008.08262.

Exactly Computing the Local Lipschitz Constant of ReLU Networks

Matt Jordan, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2020.

Provable Certificates for Adversarial Examples: Fitting a Ball in a Union of Polytopes

Matt Jordan, Justin Lewis, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2019.

Quantifying Perceptual Distortion of Adversarial Examples

Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis.

Preprint, arXiv:1902.08265.


Last Update: Jan 2021
HTML Template stolen from Chen Liu