Fall 2019 Theory Seminars will meet on Fridays from 2:30-3:30 pm in GDC 4.304 unless otherwise noted. This schedule will be updated throughout the semester.

08/30/2019 - Hao Huang, Emory University: A Proof of the Sensitivity Conjecture
09/13/2019 - Kuan Cheng, UT: Deterministic Document Exchange Protocols and Error Correcting Codes for Edit Errors
09/20/2019 - Samuel Hopkins, UC Berkeley: How to Estimate the Mean of a (Heavy Tailed) Random Vector
09/27/2019 - Jiapeng Zhang, Harvard: An Improved Sunflower Bound *NOTE: Room changed to GDC 6.302.
10/04/2019 - Chris Umans, Caltech: Fast Generalized DFTs For All Finite Groups
10/11/2019 - Scott Aaronson, UT: Quantum Supremacy and Its Applications
10/18/2019 - Barna Saha, Berkeley: Efficient Fine-Grained Algorithms
10/25/2019 - Justin Holmgren, Simons Institute: Extreme Hardness and Cryptographic Hashing
11/01/2019 - Sam Kim, Stanford: Multi-Authority Attribute-Based Encryption from LWE in the OT Model
11/08/2019 - Aaron Sidford, Stanford: Faster Energy Maximization for Faster Maximum Flow
11/15/2019 - Leonid Gurvits, City College of New York: A Poly-time DETERMINISTIC Algorithm for Simply Exponential Approximation of the Permanent of PSD Matrices and Other Cool Quantum Things
11/22/2019 - Yury Makarychev, Toyota Technological Institute at Chicago: Dimensionality reduction for k-means and k-medians clustering
12/06/2019 - Tselil Schramm, Harvard & MIT: Subexponential LPs can approximate max-cut

Hao Huang, Emory University
A Proof of the Sensitivity Conjecture

In the n-dimensional hypercube graph, one can easily choose half of the vertices so that they induce an empty graph. However, adding even one more vertex forces the induced subgraph to contain a vertex of degree at least \sqrt{n}. This result is best possible, and improves a logarithmic lower bound shown by Chung, Füredi, Graham and Seymour in 1988. In this talk we will discuss a very short algebraic proof of it.

As a direct corollary of this purely combinatorial result, the sensitivity and degree of every boolean function are polynomially related. This solves an outstanding foundational problem in theoretical computer science, the Sensitivity Conjecture of Nisan and Szegedy. 
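The two quantities being related can be checked directly on small functions. The sketch below (illustrative only, not from the talk; `OR3` and `PARITY3` are example functions chosen here) brute-forces both the sensitivity and the real polynomial degree of a boolean function on a few variables:

```python
from itertools import combinations, product

def sensitivity(f, n):
    """Max over inputs x of the number of coordinates whose flip changes f(x)."""
    best = 0
    for x in product([0, 1], repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

def degree(f, n):
    """Degree of the unique multilinear real polynomial representing f, found
    via Mobius inversion: coeff(S) = sum over T subset of S of (-1)^{|S|-|T|} f(1_T)."""
    deg = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            coeff = 0
            for r in range(size + 1):
                for T in combinations(S, r):
                    x = tuple(1 if i in T else 0 for i in range(n))
                    coeff += (-1) ** (size - r) * f(x)
            if coeff != 0:
                deg = max(deg, size)
    return deg

OR3 = lambda x: int(any(x))      # OR on 3 bits: sensitivity 3, degree 3
PARITY3 = lambda x: sum(x) % 2   # parity on 3 bits: sensitivity 3, degree 3
```

The conjecture (now theorem) says that for every total boolean function these two quantities are polynomially related; exhaustive search like this is of course only feasible for tiny n.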

Kuan Cheng, UT 
Deterministic Document Exchange Protocols and Error Correcting Codes for Edit Errors

Recently we studied two basic problems about correcting edit errors. The first one is document exchange, where two parties Alice and Bob hold two strings x and y with a bounded edit distance k. The goal is to have Alice send a short sketch to Bob, so that Bob can recover x based on y and the sketch. The second one is the fundamental problem of designing error correcting codes for edit errors, where the goal is to construct an explicit code to transmit a message x through a channel that can add at most k worst-case insertions and deletions, so that the original message x can be successfully recovered at the other end of the channel.

Both problems have been extensively studied for decades, and I will mainly talk about deterministic document exchange protocols and binary codes for insertions/deletions [CJLW18] and for insertions/deletions with transpositions [CJLW19]. These constructions all have sketch/redundancy lengths within logarithmic factors of optimal, significantly improving on previous results.
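For context (not part of the talk's constructions), edit distance can be computed by the textbook dynamic program below; the classical Levenshtein variant also allows substitutions, each of which counts as one deletion plus one insertion in the insertion/deletion-only model:

```python
def edit_distance(x, y):
    """Textbook DP for Levenshtein distance with unit-cost insertions,
    deletions, and substitutions."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of x[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                          # delete
                          d[i][j - 1] + 1,                          # insert
                          d[i - 1][j - 1] + (x[i - 1] != y[j - 1])) # substitute/match
    return d[m][n]
```

The point of the protocols in the talk is that Alice's sketch can be far shorter than x itself whenever this distance is bounded by k.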

Samuel Hopkins, UC Berkeley
How to Estimate the Mean of a (Heavy Tailed) Random Vector

We study polynomial time algorithms for estimating the mean of a multivariate random vector under very mild assumptions: we assume only that the random vector X has finite mean and covariance. This allows for X to be heavy-tailed. In this setting, the radius of the confidence intervals achieved by the empirical mean is exponentially larger than in the case that X is Gaussian or sub-Gaussian. That is, the empirical mean is poorly concentrated.

We offer the first polynomial time algorithm to estimate the mean of X with sub-Gaussian-size confidence intervals under such mild assumptions. That is, our estimators are exponentially better-concentrated than the empirical mean. Our algorithm is based on a new semidefinite programming relaxation of a high-dimensional median. Previous estimators which assumed only existence of finitely-many moments of X either sacrifice sub-Gaussian performance or are only known to be computable via brute-force search procedures requiring time exponential in the dimension.

Based on https://arxiv.org/abs/1809.07425 to appear in Annals of Statistics.
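The estimator in the paper is a semidefinite programming relaxation of a high-dimensional median, which is beyond a short snippet; the classical one-dimensional median-of-means estimator below (a standard construction, not the paper's algorithm) conveys the basic idea of replacing averaging with a median:

```python
import statistics

def median_of_means(xs, k):
    """Split the sample into k equal-size buckets, average each bucket, and
    return the median of the bucket means.  A few wild outliers corrupt at
    most a few buckets, so the median is barely moved.  (In general the
    partition into buckets should be random; here it is taken in order.)"""
    b = len(xs) // k
    means = [sum(xs[i * b:(i + 1) * b]) / b for i in range(k)]
    return statistics.median(means)
```

On a sample of 100 points with one huge outlier, the empirical mean is dragged far from the truth while the median of 11 bucket means ignores the single corrupted bucket.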

Jiapeng Zhang, Harvard
An Improved Sunflower Bound

Erdős and Rado proved the sunflower lemma: any family of sets of size w, with at least about w^w sets, must contain a sunflower. The famous sunflower conjecture is that the bound on the number of sets can be improved to c^w for some constant c. In this talk, we discuss an improved bound of about (log w)^w. In fact, we prove the result for a robust notion of sunflowers, for which the bound we obtain is tight (up to constants in the exponent).
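To fix terminology (an illustrative sketch, not from the paper): a sunflower is a family of sets whose pairwise intersections all equal a common core, so the "petals" outside the core are disjoint. A brute-force check for tiny families:

```python
from itertools import combinations

def is_sunflower(sets):
    """A family is a sunflower iff every pairwise intersection equals the
    common core (the intersection of all the sets)."""
    sets = [frozenset(s) for s in sets]
    core = frozenset.intersection(*sets)
    return all(a & b == core for a, b in combinations(sets, 2))

def find_sunflower(family, p):
    """Exhaustive search for a p-set sunflower inside family.
    Exponential time; only for tiny demonstrations."""
    for cand in combinations(family, p):
        if is_sunflower(cand):
            return cand
    return None
```

Note that pairwise disjoint sets form a sunflower with an empty core; the lemma bounds how many w-sets can avoid containing any p-sunflower.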

Chris Umans, Caltech
Fast Generalized DFTs For All Finite Groups

For any finite group G, we give an arithmetic algorithm to compute generalized Discrete Fourier Transforms (DFTs) with respect to G, using essentially O(|G|^{ω/2}) operations. Here, ω is the exponent of matrix multiplication. Up to the dependence on ω, this settles a question that dates to the 1980s.

I'll describe how to understand the familiar FFT algorithm in a much more general setting, the challenges that arise in this setting, and the ideas we use to overcome them. Beyond knowing what a group is, no specialized mathematical knowledge will be assumed.

Based on this paper: http://users.cms.caltech.edu/~umans/papers/U19.pdf to appear in FOCS 2019. 
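As a reference point (not the paper's algorithm), the familiar FFT is exactly the generalized DFT specialized to the cyclic group Z_n; the radix-2 Cooley-Tukey recursion for power-of-two n:

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT: the DFT over the cyclic group Z_n, computed
    in O(n log n) operations by splitting into even and odd indices."""
    n = len(a)
    if n == 1:
        return [complex(a[0])]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

The talk's subject is the analogous transform when Z_n is replaced by an arbitrary, possibly non-abelian, finite group, where the Fourier coefficients become matrices indexed by irreducible representations.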

Scott Aaronson, UT
Quantum Supremacy and Its Applications

I'll discuss the complexity theory behind "random circuit sampling"---the task that was apparently used by Google this summer to achieve the first-ever demonstration of quantum computational supremacy, with a 53-qubit programmable superconducting device.  This will include a hardness reduction for the problem, as well as the best known time/space tradeoffs for solving it classically (both due to myself and Lijie Chen, CCC'2017).  I'll also describe a potential application of sampling-based quantum supremacy that I came up with last year, which involves generating cryptographically certified random bits, and which Google is working to demonstrate next.  I'll try to leave plenty of time for questions.

Barna Saha, Berkeley
Efficient Fine-Grained Algorithms

One of the greatest successes of computational complexity theory is the classification of countless fundamental computational problems into polynomial-time and NP-hard ones, two classes that are often referred to as tractable and intractable, respectively. However, this crude distinction of algorithmic efficiency is clearly insufficient when handling today's large-scale data. We need a finer-grained design and analysis of algorithms that pinpoints the exact exponent of polynomial running time, and a better understanding of when a speed-up is not possible. Over the years, many polynomial-time approximation algorithms were devised as an approach to bypass the NP-hardness obstacle of many discrete optimization problems. This area has developed into a rich field producing many algorithmic ideas and has led to major advances in computational complexity. So far, however, a similar theory for problems with high polynomial running times, one that explains the trade-off between solution quality and running time, is largely lacking.

In this presentation, I will give an overview of the newly growing field of fine-grained algorithms and complexity, and my contributions therein. This will include several problems on sequences, graphs and matrices.

Justin Holmgren, Simons Institute
Extreme Hardness and Cryptographic Hashing

The random oracle model is a useful model for designing practical cryptographic protocols.  In this model, all parties are assumed to have black-box access to a shared uniformly random function.  Protocols that are secure in this model can then be heuristically transformed into plain-model (i.e., real world) cryptographic protocols by replacing the black box with a publicly agreed-upon hash function, such as SHA-256.

Unfortunately, this design methodology rests on shaky theoretical foundations.  There exist protocols that are secure in the random oracle model, but for which no choice of hash function yields a secure plain-model protocol.  A major research topic is to determine specific achievable hash function properties that suffice for the security of cryptographic protocols, and to construct efficient hash functions that achieve these properties.

In this talk I will discuss several recent results on constructing "good" hash functions, with applications ranging from complexity theory to verifiable computation and zero knowledge.  Specifically, I will focus on the tasks of (1) constructing collision-resistant hash families from generic, non-algebraic assumptions, and (2) constructing hash families that suffice for removing the interaction from interactive proofs via the Fiat-Shamir transform.

Based on joint works with Ran Canetti, Yilei Chen, Alex Lombardi, Guy Rothblum, Ron Rothblum, and Daniel Wichs.
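To illustrate task (2) with a toy sketch (not any construction from the talk, and with deliberately insecure parameters chosen here for illustration): in the Fiat-Shamir transform, the verifier's random challenge is replaced by a hash of the transcript so far, as in this Schnorr-style non-interactive proof of knowledge of a discrete log:

```python
import hashlib
import random

# Toy parameters: p = 2q + 1 with q prime, and g = 4 has order q in Z_p^*.
# Far too small for real security; for illustration only.
p, q, g = 1019, 509, 4

def challenge(*ints):
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, rng=None):
    """Non-interactive proof of knowledge of x such that y = g^x mod p."""
    rng = rng or random.Random()
    y = pow(g, x, p)
    r = rng.randrange(1, q)
    t = pow(g, r, p)           # prover's commitment
    c = challenge(g, y, t)     # hash replaces the verifier's random challenge
    s = (r + c * x) % q
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

The security question in the talk is exactly when a concrete hash family makes this hash-for-verifier substitution sound.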

Sam Kim, Stanford
Multi-Authority Attribute-Based Encryption from LWE in the OT Model

An Attribute-Based Encryption (ABE) scheme is an advanced form of encryption scheme that allows users to enforce complex access control of encrypted data. The concept of ABE is a well-studied notion in cryptography with many existing constructions from standard cryptographic assumptions. However, in almost all of these existing ABE constructions, a single central authority must generate the decryption keys for each user in the entire system. As the central authority must completely be trusted in these constructions, they are not ideal for many practical applications. 

In this talk, I will present a new Multi-Authority Attribute-Based Encryption (MA-ABE) scheme where the decryption keys for each user can be generated by multiple key-generating authorities. Our MA-ABE scheme is secure assuming the hardness of the Learning with Errors assumption and can support access policies that can be represented by circuits. All previous MA-ABE schemes relied on bilinear maps and supported access policies that are representable as monotone boolean formulas. 

Our construction works in a new model called the OT model, which we believe is a compelling alternative to the traditional MA-ABE model. I will discuss both definitions and constructions in the talk. 

Aaron Sidford, Stanford
Faster Energy Maximization for Faster Maximum Flow

The maximum flow problem is one of the most well-studied problems in combinatorial optimization. It encompasses a broad range of cut, matching, and scheduling problems and is a key proving ground for new techniques in continuous optimization and algorithmic graph theory. In this talk I will present recent advances in obtaining provably faster algorithms for solving the maximum flow problem. In particular, I will show how to solve the maximum flow problem on m-edge n-vertex unit-capacity directed graphs in time almost m^{11/8}, improving upon the previous best running time of almost m^{10/7}. Interestingly, this result stems from the interplay between recent advances in interior point methods for solving maximum flow on directed graphs and recent advances in solving classes of undirected flow problems.

This talk reflects joint work with Yang P. Liu and will be based primarily on arxiv:1910.14276.
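For scale (an elementary baseline, not the talk's method): on unit-capacity graphs even the simple BFS augmenting-path algorithm below is exact, but its running time grows with the flow value; the line of work in the talk is about pushing the exponent of m down via continuous optimization.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Exact max flow on a unit-capacity directed graph by repeatedly finding
    a shortest augmenting path with BFS in the residual graph."""
    cap = {}
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)                      # residual arcs go both ways
        cap[(a, b)] = cap.get((a, b), 0) + 1
        cap.setdefault((b, a), 0)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:       # walk back, pushing one unit
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```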

Leonid Gurvits, City College of New York
A Poly-time DETERMINISTIC Algorithm for Simply Exponential Approximation of the Permanent of PSD Matrices and Other Cool Quantum Things

I will first explain the importance of the permanent of PSD matrices in Quantum Linear Optics, and place it in the more general context of the so-called quantum permanent of bipartite density matrices.

In this talk I will show how to get a c^n-approximation algorithm for the permanent of PSD matrices, where c is a universal constant (prior to this result, not even a randomized poly-time algorithm with worst-case simply exponential relative error was known), and an e^n-approximation algorithm for the quantum permanent of separable, aka non-entangled, bipartite density matrices.

Our algorithm is based on a semidefinite program with a convex nonlinear objective. Our proof of its correctness is essentially a new Van der Waerden-like lower bound: what is the smallest value of the permanent of projectors on images of n×n correlation matrices?

If time permits, I will talk about “Other Cool Quantum Things”, including the recently refuted permanent-on-top conjecture.

Based on joint works with Nima Anari, Shayan Oveis Gharan, Amin Saberi, Dmitry Ivanov.
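For reference (not the talk's algorithm), the permanent can be computed exactly by Ryser's inclusion-exclusion formula, but only in 2^n time even for PSD matrices, which is what makes a deterministic poly-time simply-exponential approximation interesting:

```python
def permanent(A):
    """Ryser's formula:
    per(A) = (-1)^n * sum over nonempty S of (-1)^{|S|} * prod_i (sum_{j in S} A[i][j]).
    Exact, but Theta(2^n * n^2) time."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):          # nonempty column subsets S
        prod = 1
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total
```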

Yury Makarychev, Toyota Technological Institute at Chicago
Dimensionality reduction for k-means and k-medians clustering

Consider an instance of Euclidean k-means or k-medians clustering. We show that the cost of the optimal solution is preserved up to a factor of (1+ε) under a projection onto a random O(log(k/ε)/ε^2)-dimensional subspace. Further, the cost of every clustering is preserved within (1+ε). More generally, our result applies to any dimensionality reduction satisfying a mild sub-Gaussian-tail condition. Our bound on the dimension is nearly optimal. Additionally, our result applies to Euclidean k-clustering with the distances raised to the p-th power for any constant p.

Based on a joint work with Konstantin Makarychev and Ilya Razenshteyn.
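A minimal sketch of the kind of map in question (a plain Gaussian random projection, which is one instance of the general sub-Gaussian-tail condition the result covers):

```python
import math
import random

def random_projection(points, d_new, seed=0):
    """Multiply each point in R^d by a d_new x d Gaussian matrix scaled by
    1/sqrt(d_new).  With d_new = O(log(k/eps)/eps^2), the cost of every
    k-clustering is preserved up to a 1 +/- eps factor with high probability."""
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0, 1) / math.sqrt(d_new) for _ in range(d)]
         for _ in range(d_new)]
    return [[sum(row[i] * p[i] for i in range(d)) for row in G] for p in points]
```

The striking part of the result is that the target dimension depends on the number of clusters k, not on the number of points.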

Tselil Schramm, Harvard & MIT
Subexponential LPs can approximate max-cut

The theory-of-approximation orthodoxy holds that linear programs (LPs) perform poorly for max-cut and other constraint satisfaction problems, and that semidefinite programming and spectral methods are needed for nontrivial approximations. In this talk, I will describe two recent results that defy this creed: we show that LPs of subexponential size can (1) approximate max-cut to a factor strictly greater than 0.5, and (2) certify max-cut in graphs of bounded spectral radius. This resolves the linear extension complexity of max-cut. Similar results hold for Unique Games and other constraint satisfaction problems.

Based on joint works with Ryan O'Donnell, Sam Hopkins, and Luca Trevisan.
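For contrast with the factor strictly greater than 0.5 in the talk (an illustrative baseline, not from the paper): the simple greedy algorithm below always cuts at least half of all edges, which is exactly the trivial barrier that the subexponential LPs beat.

```python
def greedy_maxcut(n, edges):
    """Place each vertex on the side that cuts more of its edges to
    already-placed neighbors; each placement cuts at least half of those
    edges, so the final cut contains at least |E|/2 edges."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    side = [None] * n
    for v in range(n):
        placed = [side[u] for u in adj[v] if side[u] is not None]
        side[v] = 0 if placed.count(1) >= placed.count(0) else 1
    cut = sum(side[a] != side[b] for a, b in edges)
    return side, cut
```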