Adam Klivans's Theory Papers

  • The Power of Iterative Filtering for Supervised Learning with (Heavy) Contamination
    Adam Klivans, Kostas Stavropoulos, Kevin Tian, Arsen Vasilyan.
    Submitted, 2025.
    We prove, via a new polynomial outlier removal algorithm, the surprising fact that low-degree approximators imply not just agnostic learning but learning with contamination.

  • Testing Noise Assumptions of Learning Algorithms
    Surbhi Goel, Adam Klivans, Kostas Stavropoulos, Arsen Vasilyan.
    Submitted, 2025.
    We give the first efficient algorithms that can test if a training set satisfies the assumptions of a given noise model.

  • Learning the Sherrington-Kirkpatrick Model Even at Low Temperature
    Gautam Chandrasekaran, Adam Klivans.
    To Appear, STOC 2025.
    We give the first efficient algorithm for learning Ising models/MRFs with random edge weights that succeeds even in the low-temperature regime.

  • Learning Constant-Depth Circuits in Malicious Noise Models
    Adam Klivans, Kostas Stavropoulos, Arsen Vasilyan.
    To Appear, COLT 2025.
    We give the first algorithm for learning AC^0 with contamination and match the (best-possible) running time due to Linial, Mansour, and Nisan.

  • Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees
    Gautam Chandrasekaran, Adam Klivans, Lin Lin Lee, Kostas Stavropoulos.
    ICLR, 2025.
    We efficiently learn neural nets by importing classical kernel methods into the TDS framework.

  • Efficient Discrepancy Testing for Learning with Distribution Shift
    Gautam Chandrasekaran, Adam Klivans, Vasilis Kontonis, Kostas Stavropoulos, Arsen Vasilyan.
    NeurIPS, 2024.
    We give efficient algorithms for computing localized discrepancy distance, unifying and improving all prior work on TDS learning.

  • Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension
    Gautam Chandrasekaran, Adam Klivans, Vasilis Kontonis, Raghu Meka, Kostas Stavropoulos.
    COLT, 2024. (Best Paper Award)
    We relax the notion of optimality in supervised learning to avoid worst-case hardness results.

  • Learning Intersections of Halfspaces with Distribution Shift: Improved Algorithms and SQ Lower Bounds
    Adam Klivans, Kostas Stavropoulos, Arsen Vasilyan.
    COLT, 2024.
    We give algorithms for learning intersections of halfspaces with respect to distribution shift that nearly match results for ordinary (PAC) learning and give the first SQ lower bounds for TDS learning.

  • Testable Learning with Distribution Shift
    Adam Klivans, Kostas Stavropoulos, Arsen Vasilyan.
    COLT, 2024.
    We define a new model, TDS learning, and give the first efficient algorithms for learning with distribution shift where no assumptions are made on the test distribution.

  • Learning Mixtures of Gaussians using the DDPM Objective
    Kulin Shah, Sitan Chen, Adam Klivans.
    NeurIPS, 2023.
    We show that gradient descent on the popular DDPM objective (score estimation) used in diffusion models efficiently learns mixtures of Gaussians.

  • Agnostically Learning Single-Index Models using Omnipredictors
    Aravind Gollakota, Parikshit Gopalan, Adam Klivans, Kostas Stavropoulos.
    NeurIPS, 2023.
    We give the first efficient algorithm for agnostically learning Single-Index Models (SIMs) with arbitrary monotone and Lipschitz activations.

  • Tester-Learners for Halfspaces: Universal Algorithms
    Aravind Gollakota, Adam Klivans, Kostas Stavropoulos, Arsen Vasilyan.
    NeurIPS, 2023 (oral).
    We give a fully polynomial-time tester-learner for halfspaces in the Massart and agnostic noise settings that works simultaneously for log-concave target densities.

  • An Efficient Tester-Learner for Halfspaces
    Aravind Gollakota, Adam Klivans, Kostas Stavropoulos, Arsen Vasilyan.
    ICLR 2024.
    We give a fully polynomial-time tester-learner for halfspaces in the Massart and agnostic noise settings that obtains the best possible approximation guarantees with respect to a target Gaussian marginal.

  • A Moment-Matching Approach to Testable Learning and a New Characterization of Rademacher Complexity
    Aravind Gollakota, Adam Klivans, Pravesh Kothari.
    STOC 2023 (selected as a top paper for a special issue of SICOMP).
    We develop the foundations of testable learning algorithms and give the first model-based characterization of Rademacher complexity.

  • Learning Narrow One-Hidden-Layer ReLU Networks
    Sitan Chen, Zehao Dou, Surbhi Goel, Adam Klivans, Raghu Meka.
    COLT 2023.
    We give the first polynomial-time algorithm for learning a one-hidden-layer ReLU network with a constant number of hidden units with respect to Gaussian marginals.

  • Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
    Sitan Chen, Aravind Gollakota, Adam Klivans, Raghu Meka.
    NeurIPS 2022 (oral).
    We give the first full SQ lower bound for learning ReLU networks with respect to Gaussian distributions in the standard (noise-free) model.

  • Learning Deep ReLU Networks is Fixed-Parameter Tractable
    Sitan Chen, Adam Klivans, Raghu Meka.
    FOCS 2021.
    We give the first nontrivial results for provably learning ReLU networks (Gaussian inputs) of depth more than two; these results cannot be obtained by gradient descent.

  • Efficiently Learning Any One Hidden Layer ReLU Network From Queries
    Sitan Chen, Adam Klivans, Raghu Meka.
    NeurIPS 2021.
    We give the first polynomial-time algorithm for learning an arbitrary one-hidden-layer neural network given black-box query access to the network.

  • Tight Hardness Results for Training Depth-Two ReLU Networks
    Surbhi Goel, Adam Klivans, Pasin Manurangsi, Daniel Reichman
    ITCS 2021.
    The title says it all.

  • From Boltzmann Machines to Neural Networks and Back Again
    Surbhi Goel, Frederic Koehler, Adam Klivans
    NeurIPS, 2020.
    We give provably efficient algorithms for learning Boltzmann machines using techniques from neural networks.

  • Statistical-Query Lower Bounds via Functional Gradients
    Surbhi Goel, Aravind Gollakota, Adam Klivans
    NeurIPS, 2020.
    We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., halfspaces, ReLUs, sigmoids).

  • Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent
    Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam Klivans
    ICML, 2020.
    We prove, unconditionally, that gradient descent run on any classifier cannot learn one-layer neural networks in polynomial time, even with respect to Gaussian marginals.

  • Approximation Schemes for ReLU Regression
    Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam Klivans, Mahdi Soltanolkotabi
    COLT, 2020.
    We give the first efficient, constant-factor approximation algorithm for ReLU regression under weak assumptions on the marginal distribution.

  • Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals
    Surbhi Goel, Sushrut Karmalkar, Adam Klivans
    NeurIPS, 2019 (Spotlight).
    We show that finding the best fitting ReLU with respect to square-loss, even when the marginal is Gaussian, is computationally hard, and we give the first efficient approximation algorithm.

  • List-Decodable Linear Regression
    Sushrut Karmalkar, Adam Klivans, Pravesh Kothari
    NeurIPS, 2019 (Spotlight).
    We give the first efficient algorithm for robust regression in the list-decodable setting, where an adversary can corrupt more than half of the examples.

  • Learning Ising Models with Independent Failures
    Surbhi Goel, Daniel Kane, Adam Klivans
    COLT, 2019.
    We give the first efficient algorithm for learning Ising models when each entry is corrupted with some fixed probability.

  • Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
    Surbhi Goel, Adam Klivans
    COLT, 2019.
    We give the first assumption-free, provably efficient algorithm for learning neural networks with two nonlinear layers.

  • Preserving Randomness for Adaptive Algorithms
    William Hoza, Adam Klivans
    In the proceedings of RANDOM, 2018.
    For any randomized estimation algorithm that uses n random bits, we show how to run the algorithm on k adaptively chosen inputs using n + O(k) random bits, an essentially optimal bound.

  • Efficient Algorithms for Outlier-Robust Regression
    Adam Klivans, Pravesh Kothari, Raghu Meka
    In the proceedings of COLT, 2018.
    We give the first efficient algorithm for performing least-squares regression resilient to adversarial corruptions in both examples and labels.

  • Learning One Convolutional Layer with Overlapping Patches
    Surbhi Goel, Adam Klivans, Raghu Meka
    In the proceedings of ICML, 2018 (full oral).
    We give the first provably efficient algorithm for learning a one-layer CNN with respect to a broad class of overlapping patch structures.

  • Hyperparameter Optimization: A Spectral Approach
    Elad Hazan, Adam Klivans, Yang Yuan
    In the proceedings of ICLR, 2018.
    Selected oral presentation, NIPS workshop on Deep Learning: Theory and Practice, 2017.
    We use techniques from the analysis of Boolean functions to give improved algorithms for hyperparameter optimization.

  • Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks
    Surbhi Goel, Adam Klivans
    In NIPS, 2017.
    We give the first assumption on only the marginal distribution that implies efficient learnability for deep networks.

  • Learning Graphical Models Using Multiplicative Weights
    Adam Klivans, Raghu Meka
    In FOCS 2017.
    We give an algorithm, Sparsitron, that learns all undirected graphical models (finite alphabet) with essentially optimal running time and sample complexity.

  • Reliably Learning the ReLU in Polynomial Time
    Surbhi Goel, Varun Kanade, Adam Klivans, Justin Thaler
    In COLT, 2017.
    Selected Oral Presentation, NIPS 2016 Workshop on Optimization in ML (OptML).

  • Exact MAP Inference by Avoiding Fractional Vertices
    Erik Lindgren, Alex Dimakis, Adam Klivans
    In ICML, 2017.

  • 2014, 2015: On Leave.

  • Sparse Polynomial Learning and Graph Sketching.
    Alex Dimakis, Adam R. Klivans, Murat Kocaoglu, and Karthikeyan Shanmugam.
    In NIPS, 2014 (oral presentation).

  • Embedding Hard Learning Problems into Gaussian Space.
    Adam R. Klivans and Pravesh Kothari.
    In the Proceedings of RANDOM, 2014.

  • Learning Halfspaces under Log-Concave Densities: Polynomial Approximations and Moment-Matching
    Daniel Kane, Adam R. Klivans, and Raghu Meka.
    In the Proceedings of the 26th Conference on Learning Theory (COLT), 2013.

  • Constructing Hard Functions from Learning Algorithms.
    Adam R. Klivans, Igor C. Oliveira, and Pravesh Kothari.
    In the Proceedings of the 28th Annual Conference on Computational Complexity (CCC), 2013.

  • Moment-Matching Polynomials
    Adam R. Klivans, Raghu Meka.
    Manuscript, 2013.

  • An Explicit VC-Theorem for Low-Degree Polynomials
    Eshan Chattopadhyay, Adam R. Klivans, Pravesh Kothari.
    In the Proceedings of RANDOM, 2012.

  • Learning Functions of Halfspaces Using Prefix Covers
    Parikshit Gopalan, Adam R. Klivans, Raghu Meka.
    In the Proceedings of the 25th Conference on Learning Theory (COLT), 2012.

  • Submodular Functions are Noise Stable
    Mahdi Cheraghchi, Adam R. Klivans, Pravesh Kothari, Homin Lee.
    In the Proceedings of the 23rd ACM Symposium on Discrete Algorithms (SODA), 2012.

  • An FPTAS for #Knapsack and Related Counting Problems
    Parikshit Gopalan, Adam R. Klivans, Raghu Meka, Daniel Stefankovic, Santosh Vempala, Eric Vigoda.
    In the Proceedings of the 52nd Foundations of Computer Science (FOCS), 2011.

  • Polynomial-Time Approximation Schemes for Knapsack and Related Counting Problems Using Branching Programs
    Parikshit Gopalan, Adam R. Klivans, Raghu Meka.
    Manuscript, 2011.

  • Mansour's Conjecture is True for Random DNF Formulas
    Adam R. Klivans, Homin Lee, Andrew Wan.
    In the Proceedings of the 23rd Conference on Learning Theory (COLT), 2010.

  • An Invariance Principle for Polytopes
    Prahladh Harsha, Adam R. Klivans, Raghu Meka.
    In the Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010.
    To Appear in the Journal of the ACM.

  • Bounding the Sensitivity of Polynomial Threshold Functions
    Prahladh Harsha, Adam R. Klivans, Raghu Meka.
    In the Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010.
    Invited to appear in a special issue of Theory of Computing.
    (Conference version merged with the paper by Diakonikolas, Raghavendra, Servedio, and Tan.)

  • Baum's Algorithm Learns Intersections of Halfspaces with respect to Log-Concave Distributions
    Adam R. Klivans, Philip M. Long, Alex Tang.
    In the Proceedings of RANDOM 2009.

  • Learning Halfspaces with Malicious Noise
    Adam R. Klivans, Philip M. Long, Rocco A. Servedio.
    In the Proceedings of ICALP 2009.

  • Learning Geometric Concepts via Gaussian Surface Area
    Adam R. Klivans, Ryan O'Donnell, Rocco Servedio.
    In the Proceedings of the 49th Foundations of Computer Science (FOCS), 2008.
    We give a subexponential-time algorithm for learning all convex sets with respect to Gaussian distributions.

  • A Query Algorithm for Agnostically Learning DNF? (a 2-page open problem)
    Parikshit Gopalan, Adam T. Kalai, Adam R. Klivans
    In the Proceedings of the 21st Conference on Learning Theory (COLT), 2008.

  • Agnostically Learning Decision Trees
    Parikshit Gopalan, Adam T. Kalai, Adam R. Klivans.
    In the Proceedings of the 40th ACM Symposium on Theory of Computing (STOC), 2008.

  • List-Decoding Reed Muller Codes over Small Fields
    Parikshit Gopalan, Adam R. Klivans, David Zuckerman.
    In the Proceedings of the 40th ACM Symposium on Theory of Computing (STOC), 2008.

  • A Lower Bound for Agnostically Learning Disjunctions
    Adam R. Klivans, Alexander A. Sherstov.
    In the Proceedings of the 20th Conference on Learning Theory (COLT), 2007.

  • Cryptographic Hardness for Learning Intersections of Halfspaces
    Adam R. Klivans, Alexander A. Sherstov.
    In the Proceedings of the 47th Foundations of Computer Science (FOCS), 2006.
    Invited to appear in a special issue of the Journal of Computer and System Sciences.

  • Unconditional Lower Bounds for Learning Intersections of Halfspaces
    Adam R. Klivans, Alexander A. Sherstov.
    In the Proceedings of the 19th Conference on Learning Theory (COLT), 2006.
    Invited to appear in a special issue of Machine Learning Journal.

  • Efficient Learning Algorithms Yield Circuit Lower Bounds
    Lance Fortnow, Adam R. Klivans.
    In the Proceedings of the 19th Conference on Learning Theory (COLT), 2006.
    Invited to appear in a special issue of the Journal of Computer and System Sciences.

  • Linear Advice for Randomized Logarithmic Space
    Lance Fortnow, Adam R. Klivans.
    In the Proceedings of the 23rd International Symposium on Theoretical Aspects of Computer Science (STACS), 2006.

  • Agnostically Learning Halfspaces
    Adam Kalai, Adam R. Klivans, Yishay Mansour, Rocco Servedio
    In the Proceedings of the 46th Foundations of Computer Science (FOCS), 2005.
    Invited to appear in a special issue of SICOMP.

  • NP with Small Advice
    Lance Fortnow, Adam R. Klivans.
    To Appear in the Proceedings of the 20th Annual Conference on Computational Complexity (CCC), 2005.

  • Learnability and Automatizability
    Misha Alekhnovich, Mark Braverman, Vitaly Feldman, Adam Klivans, Toniann Pitassi.
    Proceedings of the 45th Foundations of Computer Science (FOCS), 2004.
    Invited to appear in a special issue of the Journal of Computer and System Sciences.

  • Perceptron-Like Performance for Learning Intersections of Halfspaces (a 2-page open problem)
    Adam R. Klivans, Rocco Servedio.
    Proceedings of the 17th Annual Conference on Learning Theory (COLT), 2004.

  • Learning Intersections of Halfspaces with a Margin
    Adam R. Klivans, Rocco Servedio.
    Proceedings of the 17th Annual Conference on Learning Theory (COLT), 2004.
    Invited to appear in a special issue of the Journal of Computer and System Sciences.

  • Toward Attribute Efficient Learning Algorithms
    Adam R. Klivans, Rocco Servedio.
    Proceedings of the 17th Annual Conference on Learning Theory (COLT), 2004.

  • Learning Arithmetic Circuits
    Adam R. Klivans, Amir Shpilka.
    Proceedings of the 16th Annual Conference on Learning Theory (COLT), 2003.

  • Learning Intersections and Thresholds of Halfspaces
    Adam R. Klivans, Ryan O'Donnell, Rocco Servedio.
    Proceedings of the 43rd Foundations of Computer Science (FOCS), 2002.
    Invited to appear in a special issue of the Journal of Computer and System Sciences.

  • A Complexity-Theoretic Approach to Learning
    Adam R. Klivans
    Ph.D. Thesis, MIT, 2002.

  • Learnability Beyond AC^0
    Jeff Jackson, Adam R. Klivans, Rocco Servedio.
    Proceedings of the 34th Symposium on Theory of Computing (STOC) and the 17th Conference on Computational Complexity (CCC), 2002.

  • On the Derandomization of Constant Depth Circuits
    Adam R. Klivans
    Proceedings of the 5th International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM), 2001.

  • Learning DNF in Time $2^{\tilde{O}(n^{1/3})}$
    Adam R. Klivans, Rocco A. Servedio.
    Proceedings of the 33rd Symposium on Theory of Computing (STOC), 2001.
    Winner, Best Student Paper.
    Invited to appear in a special issue of the Journal of Computer and System Sciences.

  • Randomness Efficient Identity Testing
    Adam R. Klivans, Daniel A. Spielman.
    Proceedings of the 33rd Symposium on Theory of Computing (STOC), 2001.

  • Boosting and Hard-Core Sets
    Adam R. Klivans, Rocco A. Servedio.
    Proceedings of 40th Foundations of Computer Science (FOCS), 1999.
    Invited to appear in a special issue of Machine Learning Journal.

  • Graph Nonisomorphism has Subexponential Size Proofs unless the Polynomial-Time Hierarchy Collapses
    Adam R. Klivans, Dieter van Melkebeek.
    Proceedings of 31st Symposium on the Theory of Computing (STOC), 1999.
    Journal version in SIAM Journal on Computing, 2002.

  • Factoring Polynomials Modulo Composites
    Adam R. Klivans.
    Master's Thesis, CMU CS Technical Report CMU-CS-97-136.

    Some research supported by an NSF Mathematical Sciences Postdoctoral Fellowship.