Fall 2019 Theory Seminars will meet on Fridays from 2:30-3:30 pm in GDC 4.304 unless otherwise noted. This schedule will be updated throughout the semester.

08/30/2019 - Hao Huang, Emory University: A Proof of the Sensitivity Conjecture
09/13/2019 - Kuan Cheng, UT: Deterministic Document Exchange Protocols and Error Correcting Codes for Edit Errors
09/20/2019 - Samuel Hopkins, UC Berkeley: How to Estimate the Mean of a (Heavy Tailed) Random Vector
09/27/2019 - Jiapeng Zhang, Harvard: An Improved Sunflower Bound *NOTE: Room changed to GDC 6.302.
10/04/2019 - Chris Umans, Caltech: Fast Generalized DFTs For All Finite Groups
10/11/2019 - Scott Aaronson, UT: Quantum Supremacy and Its Applications
10/18/2019 - Barna Saha, Berkeley: Efficient Fine-Grained Algorithms
10/25/2019 - Justin Holmgren, Simons Institute: Extreme Hardness and Cryptographic Hashing
11/01/2019 - Sam Kim, Stanford
11/08/2019 - Aaron Sidford, Stanford
11/15/2019 - Leonid Gurvits, City College of New York
11/22/2019 - Yury Makarychev, University of Chicago
12/06/2019 - Anurag Anshu, University of Waterloo

08/30/2019
Hao Huang, Emory University
A Proof of the Sensitivity Conjecture

In the n-dimensional hypercube graph, one can easily choose half of the vertices such that they induce an empty graph. However, having even just one more vertex would cause the induced subgraph to contain a vertex of degree at least \sqrt{n}. This result is best possible, and improves a logarithmic lower bound shown by Chung, Furedi, Graham and Seymour in 1988. In this talk we will discuss a very short algebraic proof of it.
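
For small n the statement can be checked directly by exhaustive search; the sketch below (a brute-force check unrelated to the algebraic proof discussed in the talk) verifies it for n up to 4.

```python
# Brute-force check of the combinatorial statement for small n; this is only
# an illustration, not the short algebraic proof from the talk.
from itertools import combinations
from math import sqrt

def induced_max_degree(vertex_set, n):
    """Max degree of the subgraph of the n-cube induced on vertex_set (vertices as bitmasks)."""
    vs = set(vertex_set)
    return max(sum(1 for i in range(n) if (v ^ (1 << i)) in vs) for v in vs)

for n in range(2, 5):
    half_plus_one = 2 ** (n - 1) + 1
    worst = min(
        induced_max_degree(s, n)
        for s in combinations(range(2 ** n), half_plus_one)
    )
    assert worst >= sqrt(n)
    print(f"n={n}: every {half_plus_one}-vertex induced subgraph has a vertex of degree >= {worst} >= sqrt({n})")
```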

As a direct corollary of this purely combinatorial result, the sensitivity and degree of every boolean function are polynomially related. This solves an outstanding foundational problem in theoretical computer science, the Sensitivity Conjecture of Nisan and Szegedy. 

09/13/2019
Kuan Cheng, UT 
Deterministic Document Exchange Protocols and Error Correcting Codes for Edit Errors

Recently we studied two basic problems about correcting edit errors. The first one is document exchange, where two parties Alice and Bob hold two strings x and y with a bounded edit distance k. The goal is to have Alice send a short sketch to Bob, so that Bob can recover x based on y and the sketch. The second one is the fundamental problem of designing error correcting codes for edit errors, where the goal is to construct an explicit code to transmit a message x through a channel that can add at most k worst-case insertions and deletions, so that the original message x can be successfully recovered at the other end of the channel.
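
As a toy illustration of the setting (not of the constructions in the talk), the sketch below computes the insertion/deletion distance between two strings via the standard longest-common-subsequence dynamic program; the protocols discussed let Bob recover x from y plus a sketch much shorter than x whenever this distance is at most k.

```python
# Illustrates the problem setup only, not the deterministic protocols of
# [CJLW18]/[CJLW19]: Alice holds x, Bob holds y, and the promise is that x
# and y differ by at most k insertions/deletions.
def lcs_length(x, y):
    # classic O(|x||y|) dynamic program for the longest common subsequence
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

def indel_distance(x, y):
    # number of insertions plus deletions needed to turn x into y
    return len(x) + len(y) - 2 * lcs_length(x, y)

x = "deterministic document exchange"
y = "deterministic documnt exchangee"   # one deletion and one insertion
print(indel_distance(x, y))             # 2: Bob should recover x from y plus a short sketch
```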

Both problems have been extensively studied for decades. I will mainly talk about deterministic document exchange protocols and binary codes for insertions/deletions [CJLW18], and for insertions/deletions with transpositions [CJLW19]. These constructions all achieve sketch/redundancy lengths within logarithmic factors of optimal, significantly improving on previous results.

09/20/2019
Samuel Hopkins, UC Berkeley
How to Estimate the Mean of a (Heavy Tailed) Random Vector

We study polynomial time algorithms for estimating the mean of a multivariate random vector under very mild assumptions: we assume only that the random vector X has finite mean and covariance. This allows for X to be heavy-tailed. In this setting, the radius of the confidence intervals achieved by the empirical mean is exponentially larger than in the case that X is Gaussian or sub-Gaussian. That is, the empirical mean is poorly concentrated.

We offer the first polynomial time algorithm to estimate the mean of X with sub-Gaussian-size confidence intervals under such mild assumptions. That is, our estimators are exponentially better concentrated than the empirical mean. Our algorithm is based on a new semidefinite programming relaxation of a high-dimensional median. Previous estimators that assume only the existence of finitely many moments of X either sacrifice sub-Gaussian performance or are only known to be computable via brute-force search procedures requiring time exponential in the dimension.
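
For intuition, here is a minimal sketch of the classical coordinate-wise median-of-means baseline that this line of work starts from; it is emphatically not the SDP-based estimator of the talk, which achieves sub-Gaussian confidence intervals in the full multivariate sense.

```python
# Classical median-of-means baseline (not the algorithm from the paper):
# split the samples into buckets, average each bucket, and take a
# coordinate-wise median of the bucket means to tame heavy tails.
import numpy as np

def median_of_means(samples, num_buckets=10):
    """Coordinate-wise median of bucket means; samples has shape (n, d)."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(samples))
    buckets = np.array_split(samples[idx], num_buckets)
    bucket_means = np.stack([b.mean(axis=0) for b in buckets])
    return np.median(bucket_means, axis=0)

# Heavy-tailed example: Student-t with 3 degrees of freedom (finite covariance, true mean 0).
rng = np.random.default_rng(1)
X = rng.standard_t(df=3, size=(2000, 50))
print(np.linalg.norm(X.mean(axis=0)))        # error of the empirical mean
print(np.linalg.norm(median_of_means(X)))    # typically no worse, often better
```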

Based on https://arxiv.org/abs/1809.07425, to appear in Annals of Statistics.

09/27/2019
Jiapeng Zhang, Harvard
An Improved Sunflower Bound

Erdős and Rado proved the sunflower lemma: any family of sets of size $w$, with at least about $w^w$ sets, must contain a sunflower. The famous sunflower conjecture is that the bound on the number of sets can be improved to $c^w$ for some constant $c$. In this talk, we discuss an improved bound of about $(\log w)^{w}$. In fact, we prove the result for a robust notion of sunflowers, for which the bound we obtain is tight (up to constants in the exponent).
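
To make the definition concrete, here is a small brute-force search for 3-petal sunflowers (sets whose pairwise intersections all equal a common core); the talk concerns asymptotic bounds, not this kind of search.

```python
# Brute-force sunflower search in a small set family, just to illustrate the
# definition used in the sunflower lemma.
from itertools import combinations

def is_sunflower(sets):
    """True iff all pairwise intersections equal the common core."""
    core = set.intersection(*sets)
    return all((a & b) == core for a, b in combinations(sets, 2))

def find_sunflower(family, p=3):
    """Return the first p sets forming a sunflower, or None."""
    for candidate in combinations(family, p):
        if is_sunflower([set(s) for s in candidate]):
            return candidate
    return None

# All 3-element subsets of {0,...,6}; this small family already contains
# many 3-petal sunflowers.
family = [frozenset(s) for s in combinations(range(7), 3)]
print(find_sunflower(family, p=3))
```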

10/04/2019
Chris Umans, Caltech
Fast Generalized DFTs For All Finite Groups

For any finite group G, we give an arithmetic algorithm to compute generalized Discrete Fourier Transforms (DFTs) with respect to G, using essentially O(|G|^{ω/2}) operations. Here, ω is the exponent of matrix multiplication. Up to the dependence on ω, this settles a question that dates to the 1980s.

I'll describe how to understand the familiar FFT algorithm in a much more general setting, the challenges that arise in this setting, and the ideas we use to overcome them. Beyond knowing what a group is, no specialized mathematical knowledge will be assumed.
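
As a point of reference, the sketch below spells out the naive O(|G|^2) DFT in the familiar abelian case G = Z_n and checks it against NumPy's FFT; the talk is about beating this quadratic cost for arbitrary finite groups, which the sketch does not attempt.

```python
# Naive O(|G|^2) DFT over the cyclic group Z_n, the familiar abelian special
# case of the generalized DFT; the fast generalized DFTs from the talk are
# not implemented here.
import numpy as np

def naive_cyclic_dft(f):
    """DFT of f: Z_n -> C via the characters chi_k(x) = exp(-2*pi*i*k*x/n)."""
    n = len(f)
    x = np.arange(n)
    chars = np.exp(-2j * np.pi * np.outer(x, x) / n)  # character table of Z_n
    return chars @ np.asarray(f, dtype=complex)

f = np.random.default_rng(0).normal(size=8)
assert np.allclose(naive_cyclic_dft(f), np.fft.fft(f))  # matches the usual FFT
```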

Based on this paper: http://users.cms.caltech.edu/~umans/papers/U19.pdf, to appear in FOCS 2019.

10/11/2019
Scott Aaronson, UT
Quantum Supremacy and Its Applications

I'll discuss the complexity theory behind "random circuit sampling"---the task that was apparently used by Google this summer to achieve the first-ever demonstration of quantum computational supremacy, with a 53-qubit programmable superconducting device.  This will include a hardness reduction for the problem, as well as the best known time/space tradeoffs for solving it classically (both due to myself and Lijie Chen, CCC'2017).  I'll also describe a potential application of sampling-based quantum supremacy that I came up with last year, which involves generating cryptographically certified random bits, and which Google is working to demonstrate next.  I'll try to leave plenty of time for questions.
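
The brute-force state-vector sketch below simulates a tiny random circuit (a hypothetical gate set of Haar-random single-qubit gates plus CZ layers, not Google's circuit family) only to make the sampling task concrete; this approach needs roughly 2^n memory and time, which is precisely what becomes infeasible at 53 qubits.

```python
# Toy state-vector simulation of random circuit sampling on a few qubits.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, depth = 5, 8
dim = 2 ** n_qubits

def random_single_qubit_unitary():
    # Haar-random 2x2 unitary via QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_single_qubit(state, u, qubit):
    psi = np.moveaxis(state.reshape([2] * n_qubits), qubit, 0)
    psi = np.tensordot(u, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, qubit).reshape(dim)

def apply_cz(state, q1, q2):
    psi = np.moveaxis(state.reshape([2] * n_qubits).copy(), [q1, q2], [0, 1])
    psi[1, 1] *= -1                       # phase flip when both qubits are |1>
    return np.moveaxis(psi, [0, 1], [q1, q2]).reshape(dim)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0
for _ in range(depth):
    for q in range(n_qubits):
        state = apply_single_qubit(state, random_single_qubit_unitary(), q)
    for q in range(0, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
probs /= probs.sum()                      # guard against floating-point drift
print(rng.choice(dim, size=10, p=probs))  # ten samples: "random circuit sampling"
```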

10/18/2019
Barna Saha, Berkeley
Efficient Fine-Grained Algorithms

One of the greatest successes of computational complexity theory is the classification of countless fundamental computational problems into polynomial-time and NP-hard ones, two classes that are often referred to as tractable and intractable, respectively. However, this crude distinction of algorithmic efficiency is clearly insufficient when handling today's large-scale data. We need a finer-grained design and analysis of algorithms that pinpoints the exact exponent of the polynomial running time, and a better understanding of when a speed-up is not possible. Over the years, many polynomial-time approximation algorithms were devised as an approach to bypass the NP-hardness obstacle of many discrete optimization problems. This area has developed into a rich field producing many algorithmic ideas and has led to major advances in computational complexity. So far, however, a similar theory for problems with high polynomial running times, one that explains the trade-off between solution quality and running time, is largely lacking.

In this presentation, I will give an overview of the newly growing field of fine-grained algorithms and complexity, and my contributions therein. This will include several problems on sequences, graphs and matrices.
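
As one concrete example of the kind of question fine-grained complexity asks (3SUM is a standard illustration, not necessarily one of the problems in the talk): the problem has an easy O(n^2) algorithm, sketched below, and the fine-grained question is whether the exponent 2 can be improved, and what follows for other problems if it cannot.

```python
# 3SUM in O(n^2): fix one element, then solve the remaining 2SUM instance
# with a hash set. A canonical example of a fine-grained hardness hypothesis.
def has_3sum(nums):
    """Return True iff some three entries at distinct indices sum to zero."""
    for i, a in enumerate(nums):
        seen = set()
        for b in nums[i + 1:]:
            if -(a + b) in seen:
                return True
            seen.add(b)
    return False

print(has_3sum([5, -7, 2, 11]))  # True:  5 + (-7) + 2 == 0
print(has_3sum([1, 2, 4]))       # False
```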

10/25/2019
Justin Holmgren, Simons Institute
Extreme Hardness and Cryptographic Hashing

The random oracle model is a useful model for designing practical cryptographic protocols.  In this model, all parties are assumed to have black-box access to a shared uniformly random function.  Protocols that are secure in this model can then be heuristically transformed into plain-model (i.e., real world) cryptographic protocols by replacing the black box with a publicly agreed-upon hash function, such as SHA-256.

Unfortunately, this design methodology rests on shaky theoretical foundations.  There exist protocols that are secure in the random oracle model, but for which no choice of hash function yields a secure plain-model protocol.  A major research topic is to determine specific achievable hash function properties that suffice for the security of cryptographic protocols, and to construct efficient hash functions that achieve these properties.

In this talk I will discuss several recent results on constructing "good" hash functions, with applications ranging from complexity theory to verifiable computation and zero knowledge.  Specifically, I will focus on the tasks of (1) constructing collision-resistant hash families from generic, non-algebraic assumptions, and (2) constructing hash families that suffice for removing the interaction from interactive proofs via the Fiat-Shamir transform.
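
To make task (2) concrete, here is a minimal sketch of the Fiat-Shamir transform applied to a Schnorr-style identification protocol, with SHA-256 standing in for the random oracle and deliberately tiny, insecure toy parameters; it only illustrates the transform the talk analyzes, not the hash families constructed in the works discussed.

```python
# Fiat-Shamir applied to a Schnorr-style proof of knowledge of a discrete log.
# Toy parameters only (p, q far too small for security); SHA-256 replaces the
# random oracle, which is exactly the heuristic step the talk examines.
import hashlib
import secrets

p, q, g = 2039, 1019, 4          # g has prime order q in Z_p*

def H(*ints):
    """Hash integers to a challenge in Z_q (the 'random oracle' replacement)."""
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, h):
    """Non-interactive proof of knowledge of x with h = g^x mod p."""
    r = secrets.randbelow(q)
    a = pow(g, r, p)
    c = H(g, h, a)               # Fiat-Shamir: challenge = hash of the transcript so far
    z = (r + c * x) % q
    return a, z

def verify(h, proof):
    a, z = proof
    c = H(g, h, a)
    return pow(g, z, p) == (a * pow(h, c, p)) % p

x = secrets.randbelow(q)         # prover's secret
h = pow(g, x, p)                 # public key
print(verify(h, prove(x, h)))    # True
```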

Based on joint works with Ran Canetti, Yilei Chen, Alex Lombardi, Guy Rothblum, Ron Rothblum, and Daniel Wichs.