I am a fifth-year PhD student in the Department of Computer Science at the University of Texas at Austin, advised by Alex Dimakis. I am broadly interested in the theoretical underpinnings of machine learning with provable guarantees, focusing on adversarial robustness and certification. I am also interested in ReLU geometry and algorithmic fairness. Prior to this, I received my bachelor's degree from MIT, majoring in computer science and theoretical mathematics.


Publications

Inverse Problems Leveraging Pre-trained Contrastive Representations

Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2021.

Provable Lipschitz Certification for Generative Models

Matt Jordan, Alexandros G. Dimakis.

International Conference on Machine Learning (ICML) 2021.

Quarantines as a Targeted Immunization Strategy

Jessica Hoffmann, Matt Jordan, Constantine Caramanis.

Preprint, arXiv:2008.08262.

Exactly Computing the Local Lipschitz Constant of ReLU Networks

Matt Jordan, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2020.

Provable Certificates for Adversarial Examples: Fitting a Ball in a Union of Polytopes

Matt Jordan, Justin Lewis, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2019.

Quantifying Perceptual Distortion of Adversarial Examples

Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis.

Preprint, arXiv:1902.08265.


Last Update: Jan 2021
HTML Template stolen from Chen Liu