I am a fourth-year PhD student in the Department of Computer Science at the University of Texas at Austin, advised by Alex Dimakis. I am broadly interested in the theoretical underpinnings of machine learning with provable guarantees, focusing on adversarial robustness and certification. I am also interested in ReLU geometry and algorithmic fairness. Prior to this, I received my bachelor's degree from MIT, majoring in computer science and theoretical mathematics.


Publications

Provable Lipschitz Certification for Generative Models

Matt Jordan, Alexandros G. Dimakis.

International Conference on Machine Learning (ICML) 2021.

Quarantines as a Targeted Immunization Strategy

Jessica Hoffmann, Matt Jordan, Constantine Caramanis.

Preprint, arXiv:2008.08262.

Exactly Computing the Local Lipschitz Constant of ReLU Networks

Matt Jordan, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2020.

Provable Certificates for Adversarial Examples: Fitting a Ball in a Union of Polytopes

Matt Jordan, Justin Lewis, Alexandros G. Dimakis.

Advances in Neural Information Processing Systems (NeurIPS) 2019.

Quantifying Perceptual Distortion of Adversarial Examples

Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis.

Preprint, arXiv:1902.08265.


Last Update: July 2021
HTML Template stolen from Chen Liu