Spring 2020 Theory Seminars will meet on Fridays from 2:30-3:30 pm in GDC 4.304 unless otherwise noted. This schedule will be updated throughout the semester.

01/24/2020 - Dana Moshkovitz, UT: Hardness of Unique Games Under Assumptions
02/07/2020 - Anna Gal, UT: Lower Bounds for (Non-monotone) Comparator Circuits
02/14/2020 - Eric Price, UT: Separations and equivalences between turnstile streaming and linear sketching
02/21/2020 - David Zuckerman, UT: Nearly Optimal Pseudorandomness From Hardness
02/28/2020 - John Wright, MIT: NEEXP in MIP*
03/06/2020 - David Woodruff, Carnegie Mellon: Towards a Zero-One Law for Column Subset Selection

01/24/2020
Dana Moshkovitz, University of Texas at Austin
Hardness of Unique Games Under Assumptions

We prove NP-hardness of Boolean unique games with gap $1-\delta$ vs. $1-C\delta$ for any $C>1$ and sufficiently small $\delta>0$, under an assumption on the hardness of certain non-unique games.
The proof relies on a new concentration result for functions in Gaussian space. The result relates the low-degree part of the restriction of a function to a random hyperplane to the restriction of the function's low-degree part to that hyperplane.
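
For readers outside the area, the gap problem referred to here is the following standard one (background, not a contribution of the talk). In a Boolean unique game each variable takes a value in $\{0,1\}$, and each constraint on a pair of variables $(u,v)$ is a bijection on $\{0,1\}$: either $x_u = x_v$ or $x_u \neq x_v$.

% Background sketch (standard definition, not from the talk): the gap
% decision problem for Boolean unique games with parameters $\delta$ and $C$.
\[
\mathrm{UG}(1-\delta,\ 1-C\delta):\quad
\begin{cases}
\text{YES:} & \text{some assignment satisfies at least } 1-\delta \text{ of the constraints,}\\
\text{NO:} & \text{every assignment satisfies fewer than } 1-C\delta \text{ of the constraints.}
\end{cases}
\]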

02/07/2020
Anna Gal, University of Texas at Austin
Lower Bounds for (Non-monotone) Comparator Circuits

Comparator circuits are a natural model, beyond Boolean formulas, for extending the study of circuits with limited access to intermediate results of the computation. Comparator circuits are believed to be weaker than general Boolean circuits, but they can simulate branching programs and Boolean formulas. Like branching programs, comparator circuits are known to be stronger than Boolean formulas: it is easy to construct linear-size comparator circuits computing Parity, while Parity requires quadratic formula size.

We prove the first superlinear lower bounds in the general (non-monotone) version of this model for an explicitly defined function. More precisely, we prove that the $n$-bit Element Distinctness function requires $\Omega( (n/ \log n)^{3/2})$ size comparator circuits.
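
To make the model concrete, here is a minimal simulator for comparator circuits, along with a textbook example (majority of three bits via a small sorting network); this is standard background, not a construction from the talk.

from itertools import product

# Minimal simulator for the comparator circuit model (standard definition,
# illustrative only). A comparator gate on wires (i, j) replaces their
# values (x, y) with (x AND y, x OR y), i.e., it sorts the two bits.
def run_comparator_circuit(inputs, gates, out_wire):
    w = list(inputs)
    for i, j in gates:
        w[i], w[j] = w[i] & w[j], w[i] | w[j]
    return w[out_wire]

# Example: majority of 3 bits. Comparator circuits subsume sorting
# networks; this bubble-sort network sorts the three wires, so the
# median wire carries MAJ3.
MAJ3_GATES = [(0, 1), (1, 2), (0, 1)]
for bits in product([0, 1], repeat=3):
    assert run_comparator_circuit(bits, MAJ3_GATES, out_wire=1) == (sum(bits) >= 2)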

Joint work with Robert Robere.

02/14/2020
Eric Price, University of Texas at Austin
Separations and equivalences between turnstile streaming and linear sketching

A longstanding observation, which was partially proven in Li-Nguyen-Woodruff '14, is that any turnstile streaming algorithm can be implemented as a linear sketch (the reverse is trivially true).  We study the relationship between turnstile streaming and linear sketching algorithms in more detail, giving both new separations and new equivalences between the two models.

It was shown in [LNW14] that, if a turnstile algorithm works for arbitrarily long streams with arbitrarily large coordinates at intermediate stages of the stream, then the turnstile algorithm is equivalent to a linear sketch.  We show separations of the opposite form: if either the stream length or the maximum value of the stream is substantially restricted, there exist problems for which linear sketching is exponentially harder than turnstile streaming.
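
The trivially true direction is easy to see concretely: a linear sketch maintains y = Sx, and a turnstile update (i, delta), meaning x_i += delta, becomes y += delta * S[:, i] by linearity. Below is a minimal illustration using a standard AMS-style sign sketch to estimate the second moment; this is textbook material, not code from the paper.

import numpy as np

n, k = 1000, 200                 # dimension and sketch size (illustrative values)
rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(k, n))    # random sign sketching matrix

x = np.zeros(n)                  # the (implicit) stream vector, kept only for checking
y = np.zeros(k)                  # the sketch S @ x, maintained online
for i, delta in [(3, 5.0), (7, -2.0), (3, 1.0)]:   # turnstile updates x_i += delta
    x[i] += delta
    y += delta * S[:, i]         # linearity: S @ (x + delta * e_i) = y + delta * S[:, i]

# Estimate ||x||_2^2 from the sketch alone (AMS estimator): each row s of S
# satisfies E[(s @ x)^2] = ||x||_2^2, so averaging over the k rows concentrates.
print(np.mean(y ** 2), np.sum(x ** 2))   # the two values should be close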

This work is joint with John Kallaugher.  It will appear at STOC 2020 and is available at https://arxiv.org/abs/1905.02358.

02/21/2020
David Zuckerman, University of Texas at Austin
Nearly Optimal Pseudorandomness From Hardness

Existing proofs that deduce BPP=P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against certain nondeterministic circuits, we convert any randomized algorithm running in time $t$ to a deterministic one running in time $t^{2+\alpha}$ for an arbitrarily small constant $\alpha > 0$. Such a slowdown is nearly optimal, as, under complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing).

Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size $s$ with seed length $(1+\alpha)\log s$, under the above assumption. The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list-recoverable codes.
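
To see why this seed length yields the $t^{2+\alpha}$ bound above (standard seed-enumeration arithmetic, with polylog factors suppressed; this is not the construction itself): a time-$t$ randomized algorithm on a fixed input is a circuit of size roughly $s \approx t$, and derandomizing by running the algorithm on every output of the generator and taking a majority vote costs

\[
2^{(1+\alpha)\log s} \cdot t \;=\; s^{1+\alpha} \cdot t \;\approx\; t^{1+\alpha} \cdot t \;=\; t^{2+\alpha}.
\]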

This is joint work with Dean Doron, Dana Moshkovitz, and Justin Oh.

02/28/2020
John Wright, MIT
NEEXP in MIP*

A long-standing puzzle in quantum complexity theory is to understand the power of the class MIP* of multiprover interactive proofs with shared entanglement. This question is closely related to the study of entanglement through non-local games, which dates back to the pioneering work of Bell. In this work we show that MIP* contains NEEXP (non-deterministic doubly-exponential time), exponentially improving the prior lower bound of NEXP due to Ito and Vidick. Our result shows that shared entanglement exponentially increases the power of these proof systems, as the class MIP of multiprover interactive proofs without shared entanglement is known to be equal to NEXP.
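
For readers unfamiliar with non-local games, the CHSH game is the textbook example (background, not part of this paper): two non-communicating provers receive uniformly random bits x and y and answer with bits a and b, winning iff a XOR b = x AND y. A brute-force check over classical deterministic strategies shows the classical value is 3/4, whereas provers sharing an entangled qubit pair can win with probability cos^2(pi/8), approximately 0.854 (Tsirelson's bound).

from itertools import product

# Brute force over all deterministic classical CHSH strategies (standard
# background, not from the paper). A strategy is a pair of functions
# {0,1} -> {0,1}, encoded as truth tables (a0, a1) for the first prover
# and (b0, b1) for the second.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x, y in product([0, 1], repeat=2))
    best = max(best, wins / 4)
print(best)   # 0.75: no classical strategy wins more than 3 of the 4 question pairs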

This is joint work with Anand Natarajan.

03/06/2020
David Woodruff, Carnegie Mellon
Towards a Zero-One Law for Column Subset Selection

There are a number of approximation algorithms for NP-hard versions of low rank approximation, such as finding a rank-$k$ matrix minimizing the sum of absolute values of differences to a given $n \times n$ matrix $A$, or more generally finding a rank-$k$ matrix which minimizes the sum of $p$-th powers of absolute values of differences to $A$. Many of these algorithms are linear time column subset selection algorithms, returning a subset of columns whose cost is no more than a $\mathrm{poly}(k)$ factor larger than the cost of the best rank-$k$ matrix.

The above error measures are special cases of the following general entrywise low rank approximation problem: given an arbitrary function $f$, find a rank-$k$ matrix $B$ which minimizes $\|A-B\|_f$. A natural question is: which functions $f$ admit efficient approximation algorithms? Indeed, this is a central question of recent work studying generalized low rank models.

In this work we give approximation algorithms for every function $f$ which is approximately monotone and satisfies an approximate triangle inequality, and we show both of these conditions are necessary. Further, our algorithm is efficient if the function admits an efficient approximate regression algorithm. Our approximation algorithms handle functions which are not even scale-invariant, such as the Huber loss function, which we show have very different structural properties than norms. For example, the lack of scale-invariance causes any column subset selection algorithm to provably require a factor $\sqrt{\log n}$ more columns than for $p$-norms; nevertheless, we design the first efficient column subset selection algorithms for such error measures.
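
As a toy illustration of the column subset selection framework (not the paper's algorithm), the sketch below brute-forces size-$k$ column subsets and scores each under an entrywise loss $f$, here the Huber loss; ordinary least squares stands in for the approximate $f$-regression subroutine whose existence the efficiency condition above refers to.

import numpy as np
from itertools import combinations

def huber(r, delta=1.0):
    """Entrywise Huber loss, summed: quadratic near 0, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta)).sum()

def column_subset_select(A, k):
    """Try every size-k column subset S; fit A ~ A[:, S] @ X, score with the loss."""
    best_cost, best_S = np.inf, None
    for S in combinations(range(A.shape[1]), k):
        C = A[:, list(S)]
        X, *_ = np.linalg.lstsq(C, A, rcond=None)  # proxy for approximate f-regression
        cost = huber(A - C @ X)
        if cost < best_cost:
            best_cost, best_S = cost, S
    return best_S, best_cost

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 10))  # rank-3 matrix
print(column_subset_select(A, k=3))   # any independent triple gives near-zero cost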

Joint work with Zhao Song and Peilin Zhong.