I am a fifth-year PhD student in the Department of Computer Science at the University of Texas at Austin, advised by Adam Klivans. My interests lie at the intersection of theory and machine learning. I am specifically interested in understanding what guarantees we can give for learning deep neural networks.


Prior to this, I received my Bachelor's degree from the Indian Institute of Technology (IIT) Delhi, majoring in Computer Science and Engineering. My bachelor's thesis was advised by Parag Singla and Chetan Arora.

Publications

  • Approximation Schemes for ReLU Regression
    Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam Klivans and Mahdi Soltanolkotabi
    COLT 2020 (to appear)
    [arxiv]
  • Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent
    Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar and Adam Klivans
    ICML 2020 (to appear)
  • Efficiently Learning Adversarially Robust Halfspaces with Noise
    Omar Montasser, Surbhi Goel, Ilias Diakonikolas and Nathan Srebro
    ICML 2020 (to appear)
    [arxiv]
  • Learning Mixtures of Graphs from Epidemic Cascades
    Jessica Hoffmann, Soumya Basu, Surbhi Goel and Constantine Caramanis
    Short version: Graph Representation Learning Workshop at NeurIPS 2019
    Full version: ICML 2020 (to appear)
    [arxiv]
  • Learning Ising and Potts Models with Latent Variables
    Surbhi Goel
    AISTATS 2020 (to appear)
    [arxiv-initial-version]
  • Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals
    Surbhi Goel, Sushrut Karmalkar and Adam Klivans
    NeurIPS 2019 (Spotlight)
    [arxiv]
  • Learning Ising Models with Independent Failures
    Surbhi Goel, Daniel Kane and Adam Klivans
    COLT 2019
    [arxiv]
  • Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
    Surbhi Goel and Adam Klivans
    Short version: NeurIPS Deep Learning: Bridging Theory and Practice Workshop 2017
    Full version: COLT 2019
    [arxiv]
  • Learning One Convolutional Layer with Overlapping Patches
    Surbhi Goel, Adam Klivans and Raghu Meka
    ICML 2018 (Long talk)
    [arxiv]
  • Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks
    Surbhi Goel and Adam Klivans
    NeurIPS 2017
    [arxiv]
  • Reliably Learning the ReLU in Polynomial Time
    Surbhi Goel, Varun Kanade, Adam Klivans and Justin Thaler
    Short version: NeurIPS OPTML Workshop 2016 (Oral Presentation)
    Full version: COLT 2017
    [arxiv]

Preprints

  • Learning Two-layer Networks with Multinomial Activation and High Thresholds
    Surbhi Goel and Rina Panigrahy
    [arxiv]
  • Quantifying Perceptual Distortion of Adversarial Examples
    Matt Jordan, Naren Manoj, Surbhi Goel and Alex Dimakis
    [arxiv]
  • Improved Learning of One-hidden-layer Convolutional Neural Networks with Overlaps
    Simon Du and Surbhi Goel
    [arxiv]

Awards

  • Rising Stars in ML (2019)
  • Rising Stars in EECS (2019)
  • J.P. Morgan PhD Fellowship (2019-2020)
  • Simons-Berkeley Research Fellowship for “Foundations of Deep Learning” (2019)
  • University of Texas at Austin Graduate Continuing Bruton Fellowship (2018)
  • University of Texas at Austin Graduate School Summer Fellowship (2017)
  • University of Texas Professional Development Award for conference travel (2017-2018)
  • ICIM Stay Ahead Award for Undergraduate Thesis (2015)
  • Suresh Chandra Memorial Trust Award for Undergraduate Thesis (2015)
  • Aditya Birla Scholarship (2011-2015)
  • OPJEM Scholarship (2011-2012)
  • KVPY Fellowship (2010-2011)