Fall 2022

Fall 2022 Theory Seminars will meet on Fridays from 1:30 - 2:30 pm in GDC 4.302. This schedule will be updated throughout the semester.


09/02/2022 - David Wu, UT: Batch Arguments for NP and More from Standard Bilinear Group Assumptions
09/09/2022 - Joe Neeman, UT: Bubble Clusters in Geometry and Computer Science
09/16/2022 - Chinmay Nirkhe, IBM Quantum: NLTS Hamiltonians from Good Quantum Codes
09/23/2022 - Sam Hopkins, MIT: Matrix Discrepancy from Quantum Communication **BEGINS AT 1:45 PM**
10/07/2022 - 
10/14/2022 - Danupon Nanongkai, Max Planck Institute for Informatics
10/21/2022 - Pravesh Kothari, CMU
10/28/2022 - Nicole Wein, DIMACS
11/04/2022 - Shuichi Hirahara, National Institute of Informatics, Tokyo, Japan
11/11/2022 - Nikhil Bansal, University of Michigan
11/18/2022 - Huacheng Yu, Princeton
12/02/2022 - 
12/09/2022 - 

David Wu, UT
Batch Arguments for NP and More from Standard Bilinear Group Assumptions

Non-interactive batch arguments for NP provide a way to amortize the cost of NP verification across multiple instances. They enable a computationally-bounded prover to convince a verifier of multiple NP statements with communication that is much smaller than the total witness length and with verification time that is smaller than individually checking each instance.

In this talk, I describe a new and direct approach for constructing batch arguments from standard cryptographic assumptions on groups with bilinear maps (e.g., the subgroup decision or the k-Lin assumption). Our approach avoids heavy tools like correlation-intractable hash functions or probabilistically-checkable proofs common to previous approaches. In turn, we also obtain the first construction of batch arguments for NP from standard bilinear map assumptions.

As corollaries to our main construction, we also obtain a delegation scheme for RAM programs (also known as a succinct non-interactive argument for P) as well as an aggregate signature scheme supporting bounded aggregation from standard bilinear map assumptions in the plain model.

Joint work with Brent Waters.

Joe Neeman, UT
Bubble Clusters in Geometry and Computer Science

On the plane, a circle is the smallest-perimeter shape with a given area. In three (and higher) dimensions, it's a sphere. Now what if you want to enclose two (or more) given areas using a minimal perimeter? You can do better than two circles, because if you smoosh two circles together then you can reuse part of the perimeter for both shapes. For two sets, the problem (known as the "double bubble conjecture") was solved about 20 years ago.

I'll talk about some recent progress in this area (joint work with E. Milman), some open problems, and the relevance of all this to computer science.

Chinmay Nirkhe, IBM Quantum
NLTS Hamiltonians from Good Quantum Codes

The quantum PCP conjecture is one of the central open questions in quantum complexity theory. It asserts that calculating even a rough approximation to the ground energy of a local Hamiltonian is intractable even for quantum devices. The widely believed separation between the complexity classes NP and QMA necessitates that polynomial-length classical proofs do not exist for calculating the ground energy. This further implies that low-energy states of local Hamiltonians cannot be described by constant depth quantum circuits.

The "No low-energy trivial states (NLTS)" conjecture by Freedman and Hastings posited the existence of such Hamiltonians. This talk will describe a line of research culminating in a proof of the NLTS conjecture by Anshu, Breuckmann, and Nirkhe. The construction is based on quantum error correction, and in the talk I will elaborate on how error correction, local Hamiltonians, and low-depth quantum circuits are related.

09/23/2022 **BEGINS AT 1:45 PM**
Sam Hopkins, MIT
Matrix Discrepancy from Quantum Communication

Discrepancy is the mathematical study of balance; it has broad applications across TCS. For example, given vectors v_1,...,v_n \in R^m, one may ask how well they may be split into two groups S and \bar{S} to minimize || \sum_{i \in S} v_i - \sum_{i \in \bar{S}} v_i ||, where ||...|| is some norm. Spencer's famous result, "6 standard deviations suffice", gives essentially a full characterization in the setting that the v_i's are bounded in infinity norm and ||...|| is itself the infinity norm. A natural question, the "Matrix Spencer Conjecture", asks for a non-commutative version of this picture: can matrices be balanced as well as vectors can? This question has eluded much progress despite substantial attention in the last 10 years.
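
To make the balancing notion concrete, here is a brute-force sketch (illustrative only, and exponential in n; Spencer's theorem concerns what bounds are achievable, not this algorithm). Choosing signs ±1 for each vector is the same as choosing the split into S and its complement.

```python
from itertools import product

def discrepancy(vectors):
    """Minimum over sign assignments of the infinity norm of the signed sum.

    A sign vector eps in {+1,-1}^n encodes a split into two groups:
    the signed sum equals sum_{i in S} v_i - sum_{i not in S} v_i.
    """
    n, m = len(vectors), len(vectors[0])
    best = float("inf")
    for signs in product((1, -1), repeat=n):
        sums = [sum(e * v[j] for e, v in zip(signs, vectors)) for j in range(m)]
        best = min(best, max(abs(s) for s in sums))
    return best

# Two identical vectors cancel exactly under opposite signs: discrepancy 0.
print(discrepancy([(1, 2), (1, 2)]))  # -> 0
```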

In this talk I will describe a new communication-based perspective on discrepancy theory, showing that discrepancy upper bounds (i.e., theorems saying that a certain amount of balance is always possible) are actually equivalent to communication lower bounds (i.e. theorems saying that certain communication problems lack efficient protocols). This perspective opens a line of attack on the Matrix Spencer conjecture, via a lower-bounds problem in quantum communication which is of independent interest. In the talk (time permitting), I will show a new proof of Spencer's theorem via communication lower bounds, and I will describe how that proof can be generalized to prove a special case of the Matrix Spencer Conjecture, where the matrices to be balanced have polynomially-small rank.

Based on joint work with Prasad Raghavendra and Abhishek Shetty.

Fall 2021

08/27/2021 - Alexandros Hollender, University of Oxford: The Complexity of Gradient Descent: CLS = PPAD ∩ PLS
09/03/2021 - Nathan Klein, University of Washington: A (Slightly) Improved Approximation Algorithm for Metric TSP
09/10/2021 - Vishesh Jain, Stanford University, Stein Fellow: Towards the Sampling Lovász Local Lemma
09/17/2021 - Seth Pettie, University of Michigan: Information Theoretic Limits of Cardinality Estimation: Fisher Meets Shannon
10/01/2021 - Roei Tell, IAS+DIMACS: Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise
10/08/2021 - Eric Vigoda, UCSB: Computational Phase Transition for Approximate Counting
10/15/2021 - Srikanth Srinivasan, Aarhus University: Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits
10/22/2021 - Amnon Ta-Shma, Tel-Aviv University: Expander Random Walks: A Fourier-Analytic Approach
11/05/2021 -  Joe Neeman, UT Austin: Robust Testing of Low-Dimensional Functions *Hybrid Event: To be hosted virtually and in person in GDC 4.304*
11/12/2021 - Virginia Williams, MIT: Hardness of Approximate Diameter: Now for Undirected Graphs
11/19/2021 -  Minshen Zhu, Purdue University: Exponential Lower Bounds for Locally Decodable and Correctable Codes for Insertions and Deletions * Event Begins at 11:00 AM*
12/03/2021 - Shai Evra, Hebrew University of Jerusalem: Locally Testable Codes with Constant Rate, Distance, and Locality

Alexandros Hollender, University of Oxford
The Complexity of Gradient Descent: CLS = PPAD ∩ PLS

We study search problems that can be solved by performing Gradient Descent on a bounded convex polytopal domain and show that this class is equal to the intersection of two well-known classes: PPAD and PLS. As our main underlying technical contribution, we show that computing a Karush-Kuhn-Tucker (KKT) point of a continuously differentiable function over the domain [0,1]^2 is PPAD ∩ PLS-complete. This is the first natural problem to be shown complete for this class. Our results also imply that the class CLS (Continuous Local Search), which was defined by Daskalakis and Papadimitriou as a more "natural" counterpart to PPAD ∩ PLS and contains many interesting problems, is itself equal to PPAD ∩ PLS.
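
As an illustrative sketch of the search problem (not the hardness construction): projected gradient descent on [0,1]^2 runs until the iterate stops moving, at which point it sits at an approximate KKT point. The quadratic objective below is a hypothetical example, with a minimizer that lies on the boundary of the box.

```python
def project(p):
    """Clip a point coordinate-wise into the box [0,1]^2."""
    return tuple(min(1.0, max(0.0, x)) for x in p)

def projected_gradient_descent(grad, p, step=0.1, tol=1e-6, max_iter=100000):
    """Iterate projected gradient steps on [0,1]^2 until the projected
    step no longer moves the iterate, i.e. an approximate KKT point."""
    for _ in range(max_iter):
        g = grad(p)
        q = project((p[0] - step * g[0], p[1] - step * g[1]))
        if max(abs(q[0] - p[0]), abs(q[1] - p[1])) < tol:
            return q
        p = q
    return p

# f(x, y) = (x - 0.3)^2 + (y - 1.5)^2; its minimum over [0,1]^2 is (0.3, 1.0).
grad = lambda p: (2 * (p[0] - 0.3), 2 * (p[1] - 1.5))
x, y = projected_gradient_descent(grad, (0.9, 0.1))
print(round(x, 3), round(y, 3))  # -> 0.3 1.0
```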

Nathan Klein, University of Washington
A (Slightly) Improved Approximation Algorithm for Metric TSP

I will describe work in which we obtain a 3/2-epsilon approximation algorithm for metric TSP, for some epsilon>10^{-36}. This slightly improves over the classical 3/2 approximation algorithm due to Christofides [1976] and Serdyukov [1978]. The talk will focus on giving an overview of the key ideas involved, such as properties of Strongly Rayleigh distributions and the structure of near minimum cuts. This is joint work with Anna Karlin and Shayan Oveis Gharan. 

Vishesh Jain, Stanford University, Stein Fellow
Towards the Sampling Lovász Local Lemma

For a constraint satisfaction problem which satisfies the condition of the Lovász local lemma (LLL), the celebrated algorithm of Moser and Tardos allows one to efficiently find a satisfying assignment. In the past few years, much work has gone into understanding whether one can efficiently sample from (approximately) the uniform distribution on satisfying assignments under LLL-like conditions. I will discuss recent progress on this problem, joint with Huy Tuan Pham (Stanford) and Thuy Duong Vuong (Stanford).

Seth Pettie, University of Michigan
Information Theoretic Limits of Cardinality Estimation: Fisher Meets Shannon

Estimating the cardinality (number of distinct elements) of a large multiset is a classic problem in streaming and sketching, dating back to Flajolet and Martin's classic Probabilistic Counting (PCSA) algorithm from 1983. In this paper we study the intrinsic tradeoff between the space complexity of the sketch and its estimation error in the random oracle model. We define a new measure of efficiency for cardinality estimators called the Fisher-Shannon (Fish) number H/I. It captures the tension between the limiting Shannon entropy (H) of the sketch and its normalized Fisher information (I), which characterizes the variance of a statistically efficient, asymptotically unbiased estimator.

Our results are as follows:

  • We prove that all base-q variants of Flajolet and Martin's PCSA sketch have Fish-number H_0/I_0 ≈ 1.98016 and that every base-q variant of (Hyper)LogLog has Fish-number worse than H_0/I_0, but that they tend to H_0/I_0 in the limit as q→∞. Here H_0, I_0 are precisely defined constants.

  • We describe a sketch called Fishmonger that is based on a smoothed, entropy-compressed variant of PCSA with a different estimator function. It is proved that with high probability, Fishmonger processes a multiset of [U] such that at all times, its space is O(log^2 log U) + (1+o(1))(H_0/I_0)b ≈ 1.98b bits and its standard error is 1/√b.

We give circumstantial evidence that H_0/I_0 is the optimum Fish-number of mergeable sketches for Cardinality Estimation. We define a class of linearizable sketches and prove that no member of this class can beat H_0/I_0. The popular mergeable sketches are, in fact, also linearizable.
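
As a toy illustration of the sketching model only, here is a single-register Flajolet-Martin-style estimator, far cruder than PCSA or Fishmonger: hash each element and track the deepest run of trailing zero bits seen. Duplicates hash identically, so repeats cannot inflate the estimate.

```python
import hashlib

def rho(h):
    """1-indexed position of the lowest set bit of h (0 if h == 0)."""
    return (h & -h).bit_length()

def fm_estimate(stream):
    """Toy Flajolet-Martin-style cardinality estimator: output 2^R where
    R is the maximum rho(hash(item)) over the stream."""
    max_rho = 0
    for item in stream:
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
        max_rho = max(max_rho, rho(h))
    return 2 ** max_rho

stream = [i % 1000 for i in range(10_000)]  # 1000 distinct items, many repeats
print(fm_estimate(stream))  # a rough power-of-two estimate of 1000
```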

Roei Tell, IAS+DIMACS
Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise

Textbook hardness-to-randomness converts circuit lower bounds into PRGs. But is this black-box approach really necessary for derandomization? In this talk I'll show how to revamp the classical hardness-to-randomness framework, converting new types of *uniform lower bounds* into *non-black-box derandomization*. This yields conclusions such as promiseBPP = promiseP without PRGs, and reveals a close connection between worst-case derandomization and the new types of uniform lower bounds. Another instantiation of this new framework allows one to deduce, under plausible hypotheses, that randomness can be eliminated at essentially no observable cost when solving decision problems.

Based on joint work with Lijie Chen.

Eric Vigoda, UCSB
Computational Phase Transition for Approximate Counting

Spectral independence is a new technique, recently introduced by Anari et al (2020), for establishing rapid mixing results for Markov Chain Monte Carlo (MCMC) algorithms. We show that spectral independence implies optimal convergence rate for a variety of MCMC algorithms. This yields a dramatic computational phase transition for various approximate counting problems such as weighted independent sets.
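
As a toy sketch of the kind of MCMC algorithm the talk analyzes (Glauber dynamics for the hardcore model of weighted independent sets, with fugacity λ), not the spectral-independence machinery itself:

```python
import random

def glauber_step(adj, ind_set, lam, rng):
    """One step of Glauber dynamics for the hardcore model: pick a uniform
    random vertex and resample its occupancy conditioned on its neighbors."""
    v = rng.randrange(len(adj))
    ind_set.discard(v)
    if all(u not in ind_set for u in adj[v]):
        # No occupied neighbors: include v with probability lam / (1 + lam).
        if rng.random() < lam / (1 + lam):
            ind_set.add(v)

def sample_independent_set(adj, lam, steps=10000, seed=0):
    """Run the chain for `steps` updates from the empty set."""
    rng = random.Random(seed)
    ind_set = set()
    for _ in range(steps):
        glauber_step(adj, ind_set, lam, rng)
    return ind_set

# 4-cycle: vertices 0-1-2-3-0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
s = sample_independent_set(adj, lam=1.0)
assert all(u not in s for v in s for u in adj[v])  # s is an independent set
```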

This talk is based on joint works with Zongchen Chen (MIT) and Kuikui Liu (Washington).

Srikanth Srinivasan, Aarhus University
Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits

Every multivariate polynomial P(x_1,...,x_n) can be written as a sum of monomials, i.e. a sum of products of variables and field constants. In general, the size of such an expression is the number of monomials that have a non-zero coefficient in P.

What happens if we add another layer of complexity, and consider sums of products of sums (of variables and field constants) expressions? Now, it becomes unclear how to prove that a given polynomial P(x_1,...,x_n) does not have small expressions. In this work, we solve exactly this problem.

More precisely, we prove that certain explicit polynomials have no polynomial-sized "Sigma-Pi-Sigma" (sums of products of sums) representations. We can also show similar results for Sigma-Pi-Sigma-Pi, Sigma-Pi-Sigma-Pi-Sigma and so on for all "constant-depth" expressions.
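
A Sigma-Pi-Sigma expression can be written down concretely as data: a sum of products of linear forms. The toy evaluator below illustrates the representation (it is only an illustration of the circuit model, not anything from the lower-bound proof); the identity x^2 - y^2 = (x + y)(x - y) gives a one-product example.

```python
def eval_linear_form(coeffs, const, point):
    """Evaluate the bottom Sigma layer: c_1*x_1 + ... + c_n*x_n + const."""
    return sum(c * x for c, x in zip(coeffs, point)) + const

def eval_sigma_pi_sigma(circuit, point):
    """Evaluate a Sigma-Pi-Sigma expression: `circuit` is a list of
    product gates, each a list of (coeffs, const) linear forms; the top
    layer sums the products."""
    total = 0
    for product_gate in circuit:
        term = 1
        for coeffs, const in product_gate:
            term *= eval_linear_form(coeffs, const, point)
        total += term
    return total

# x^2 - y^2 written as the single product (x + y) * (x - y).
circuit = [[((1, 1), 0), ((1, -1), 0)]]
print(eval_sigma_pi_sigma(circuit, (5, 3)))  # -> 16
```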

Based on joint work with Nutan Limaye (ITU Copenhagen + IIT Bombay) and Sébastien Tavenas (USMB + Univ. Grenoble, CNRS).

Amnon Ta-Shma, Tel-Aviv University
Expander Random Walks: A Fourier-Analytic Approach

We consider the following question: Assume the vertices of an expander graph are labelled by {1,-1}. What test functions can or cannot distinguish t independent samples from those obtained by a random walk?

We prove that all symmetric functions are fooled by random walks on expanders with constant spectral gap. Our proof uses Fourier analysis, observing that the bias of a character χ_S is determined not only by the cardinality of S, but also by the distances between the elements of S.

The talk is based on two papers:
- Expander Random Walks: A Fourier-Analytic Approach with Gil Cohen and Noam Peri, and,
- Expander Random Walks: The General Case and Limitations with Gil Cohen, Dor Minzer, Shir Peleg and Aaron Potechin.

Joe Neeman, UT Austin
Robust Testing of Low-Dimensional Functions  *Hybrid Event: To be hosted virtually and in person in GDC 4.304*

A natural problem in high-dimensional inference is to decide if a classifier f:R^n→{−1,1} depends on a small number of linear directions of its input data. Call a function g:R^n→{−1,1} a linear k-junta if it is completely determined by some k-dimensional subspace of the input space. A recent work of the authors showed that linear k-juntas are testable. Thus there exists an algorithm to distinguish between: 1. f:R^n→{−1,1} which is a linear k-junta with surface area s, 2. f is ϵ-far from any linear k-junta with surface area (1+ϵ)s, where the query complexity of the algorithm is independent of the ambient dimension n.

Following the surge of interest in noise-tolerant property testing, in this paper we prove a noise-tolerant (or robust) version of this result. Namely, we give an algorithm which given any c>0, ϵ>0, distinguishes between 1. f:R^n→{−1,1} has correlation at least c with some linear k-junta with surface area s. 2. f has correlation at most c−ϵ with any linear k-junta with surface area at most s. The query complexity of our tester is k^{poly(s/ϵ)}.

Using our techniques, we also obtain a fully noise tolerant tester with the same query complexity for any class C of linear k-juntas with surface area bounded by s. As a consequence, we obtain a fully noise tolerant tester with query complexity k^{O(poly(log k/ϵ))} for the class of intersections of k-halfspaces (for constant k) over the Gaussian space. Our query complexity is independent of the ambient dimension n. Previously, no non-trivial noise-tolerant testers were known even for a single halfspace.

Virginia Williams, MIT
Hardness of Approximate Diameter: Now for Undirected Graphs

Approximating the graph diameter is a basic task of both theoretical and practical interest. A simple folklore algorithm can output a 2-approximation to the diameter in linear time by running BFS from an arbitrary vertex. It has been open whether a better approximation is possible in near-linear time.
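
The folklore algorithm from the paragraph above can be sketched directly: one BFS from an arbitrary vertex yields an eccentricity e, and the triangle inequality gives e ≤ diameter ≤ 2e, so reporting e is a 2-approximation (from below) in linear time.

```python
from collections import deque

def eccentricity(adj, src):
    """BFS from src; return the largest distance reached."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter_2_approx(adj):
    """Folklore linear-time bound: for any vertex v,
    ecc(v) <= diameter <= 2 * ecc(v)."""
    src = next(iter(adj))
    return eccentricity(adj, src)

# Path 0-1-2-3-4: true diameter 4; BFS from the endpoint 0 reports 4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diameter_2_approx(path))  # -> 4
```

On a star graph the same routine started at the center reports 1 while the true diameter is 2, showing where the factor 2 is lost.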

A series of papers on fine-grained complexity have led to strong hardness results for diameter, culminating in our recent tradeoff curve (FOCS'21, joint with Ray Li and Mina Dalirrooyfard), showing that under the Strong Exponential Time Hypothesis (SETH), for any integer k≥2 and δ>0, a 2−1/k−δ approximation for diameter in m-edge graphs requires m^{1+1/(k−1)−o(1)} time. In particular, the simple linear-time 2-approximation algorithm is optimal.

In this talk I will give an overview of the known algorithms and fine-grained lower bounds for diameter, including the SETH-based optimality of the 2-approximation algorithm.

Minshen Zhu, Purdue University
Exponential Lower Bounds for Locally Decodable and Correctable Codes for Insertions and Deletions * Event Begins at 11:00 AM*

Locally Decodable Codes (LDCs) are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. LDCs for Hamming errors have been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient to decode from large amounts of error, with small query complexity. Despite exciting progress, we still don’t have satisfactory answers in several important parameter regimes. For example, in the case of 3-query LDCs, the gap between existing constructions and lower bounds is superpolynomial in the message length.

In this work we study LDCs for insertion and deletion errors, called Insdel LDCs. Their study was initiated by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), who gave a reduction from Hamming LDCs to Insdel LDCs with a small blowup in the code parameters. On the other hand, the only known lower bounds for Insdel LDCs come from those for Hamming LDCs, thus there is no separation between them.

Here we prove new, strong lower bounds for the existence of Insdel LDCs. In particular, we show that 2-query linear Insdel LDCs do not exist, and give an exponential lower bound for the length of all q-query Insdel LDCs with constant q. For q ≥ 3 our bounds are exponential in the existing lower bounds for Hamming LDCs. Furthermore, our exponential lower bounds continue to hold for adaptive decoders, and even in private-key settings where the encoder and decoder share secret randomness. This exhibits a strict separation between Hamming LDCs and Insdel LDCs. Our strong lower bounds also hold for the related notion of Insdel LCCs (except in the private-key setting), due to an analogue to the Insdel notions of a reduction from Hamming LCCs to LDCs.

Our techniques are based on a delicate design and analysis of hard distributions of insertion and deletion errors, which depart significantly from typical techniques used in analyzing Hamming LDCs.

Shai Evra, Hebrew University of Jerusalem
Locally Testable Codes with Constant Rate, Distance, and Locality

A locally testable code (LTC) is an error correcting code that has a property-tester. The tester reads q bits that are randomly chosen, and rejects words with probability proportional to their distance from the code. The parameter q is called the locality of the tester.
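
To make "property tester" concrete, here is a toy 2-query tester for the repetition code {00...0, 11...1} (which has terrible rate; the point of the talk's codes is to achieve constant locality together with constant rate and distance): read two random positions and accept iff they agree. Words far from the code are rejected with probability proportional to their distance.

```python
import random

def repetition_tester(word, rng):
    """2-query tester for the repetition code: accept iff two random
    positions of the word agree."""
    i, j = rng.randrange(len(word)), rng.randrange(len(word))
    return word[i] == word[j]

def rejection_rate(word, trials=20000, seed=0):
    """Empirical rejection probability of the tester on `word`."""
    rng = random.Random(seed)
    rejects = sum(not repetition_tester(word, rng) for _ in range(trials))
    return rejects / trials

print(rejection_rate("0000000000"))        # -> 0.0  (codeword: never rejected)
print(rejection_rate("0000011111") > 0.3)  # -> True (far from the code)
```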

LTCs were initially studied as important components of PCPs, and since then the topic has evolved on its own. High rate LTCs could be useful in practice: before attempting to decode a received word, one can save time by first quickly testing if it is close to the code.

An outstanding open question has been whether there exist "c3-LTCs", namely LTCs with *c*onstant rate, *c*onstant distance, and *c*onstant locality.

In this work we construct such codes based on a new two-dimensional complex which we call a left-right Cayley complex. This is essentially a graph which, in addition to vertices and edges, also has squares. Our codes can be viewed as a two-dimensional version of (the one-dimensional) expander codes, where the codewords are functions on the squares rather than on the edges.

Spring 2022

02/18/2022 - Oliver Korten, Columbia: The Hardest Explicit Construction
03/11/2022 - Matthew Ferland, USC: Winning the War by (Strategically) Losing Battles: Settling the Complexity of Grundy-Values in Undirected Geography
03/25/2022 -  Lisa Sauermann, MIT: List-decoding for Reed-Solomon codes
04/01/2022 - Kevin Tian, Stanford University: Semi-Random Sparse Recovery in Nearly-Linear Time
04/08/2022 - Debmalya Panigrahi, Duke University: Recent Results in Minimum Cut Algorithms
04/15/2022 - William Hoza, Simons Institute: Fooling Constant-Depth Threshold Circuits
04/22/2022 - Sushant Sachdeva, University of Toronto: Almost Linear Time Algorithms for Max-Flow and More
04/29/2022 - John Kallaugher, Sandia National Laboratories: A Quantum Advantage for a Natural Streaming Problem **HYBRID EVENT in GDC 4.304 - Event will begin at 2:00PM.**
05/06/2022 - Dean Doron, Ben-Gurion University: Approximating Large Powers of Stochastic Matrices in Small Space

02/18/2022
Oliver Korten, Columbia
The Hardest Explicit Construction

We investigate the complexity of explicit construction problems, where the goal is to produce a particular object of size n possessing some pseudorandom property in time polynomial in n. We give overwhelming evidence that APEPP, defined originally by Kleinberg et al., is the natural complexity class associated with explicit constructions of objects whose existence follows from the probabilistic method, by placing a variety of such construction problems in this class. We then demonstrate that a result of Jeřábek on provability in Bounded Arithmetic, when reinterpreted as a reduction between search problems, shows that constructing a truth table of high circuit complexity is complete for APEPP under P^NP reductions. This illustrates that Shannon's classical proof of the existence of hard boolean functions is in fact a universal probabilistic existence argument: derandomizing his proof implies a generic derandomization of the probabilistic method. As a corollary, we prove that EXP^NP contains a language of circuit complexity 2^{n^Ω(1)} if and only if it contains a language of circuit complexity 2^n/2n. Finally, for several of the problems shown to lie in APEPP, we demonstrate direct polynomial time reductions to the explicit construction of hard truth tables.

03/11/2022
Matthew Ferland, USC
Winning the War by (Strategically) Losing Battles: Settling the Complexity of Grundy-Values in Undirected Geography

We settle two long-standing complexity-theoretic questions, open since 1981 and 1993, in combinatorial game theory (CGT).

We prove that the Grundy value (a.k.a. nim-value, or nimber) of Undirected Geography is PSPACE-complete to compute. This exhibits a stark contrast with a result from 1993 that Undirected Geography is polynomial-time solvable. By distilling to a simple reduction, our proof further establishes a dichotomy theorem, providing a "phase transition to intractability" in Grundy-value computation, sharply characterized by a maximum degree of four: The Grundy value of Undirected Geography over any degree-three graph is polynomial-time computable, but over degree-four graphs, even when planar and bipartite, it is PSPACE-hard. Additionally, we show, for the first time, how to construct Undirected Geography instances with Grundy value ∗n and size polynomial in n.

We strengthen a result from 1981 showing that sums of tractable partisan games are PSPACE-complete in two fundamental ways. First, since Undirected Geography is an impartial ruleset, we extend the hardness of sums to impartial games, a strict subset of partisan games. Second, the 1981 construction is not built from a natural ruleset, instead using a long sum of tailored short-depth game positions. We use the sum of two Undirected Geography positions to create our hard instances. Our result also has computational implications for Sprague-Grundy Theory (1930s), which shows that the Grundy value of the disjunctive sum of any two impartial games can be computed, in polynomial time, from their Grundy values. In contrast, we prove that assuming PSPACE ≠ P, there is no general polynomial-time method to summarize two polynomial-time solvable impartial games to efficiently solve their disjunctive sum.
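
The two Sprague-Grundy facts invoked above can be sketched concretely on Nim heaps (illustrative only; the talk concerns Undirected Geography): the Grundy value of a position is the mex of the Grundy values of its options, and the Grundy value of a disjunctive sum is the XOR (nim-sum) of the summands' Grundy values.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: smallest non-negative integer not in values."""
    seen = set(values)
    g = 0
    while g in seen:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy_nim_heap(n):
    """Grundy value of a single Nim heap via mex over all smaller heaps.
    This recovers the classical fact grundy(heap of size n) = n."""
    return mex(grundy_nim_heap(k) for k in range(n))

def grundy_sum(g1, g2):
    """Sprague-Grundy: the Grundy value of a disjunctive sum of two
    impartial games is the XOR (nim-sum) of their Grundy values."""
    return g1 ^ g2

print(grundy_nim_heap(5))                                  # -> 5
print(grundy_sum(grundy_nim_heap(3), grundy_nim_heap(5)))  # -> 6
```

A sum with Grundy value 0 (e.g. two equal heaps) is a second-player win, which is exactly why efficiently summarizing summands matters.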

03/25/2022
Lisa Sauermann, MIT
List-decoding for Reed-Solomon codes

Reed-Solomon codes are an important and intensively studied class of error-correcting codes. After giving some background, this talk will discuss the so-called list-decoding problem for Reed-Solomon codes. More specifically, we prove that for any fixed list-decoding parameters, there exist Reed-Solomon codes with a certain rate, which is optimal up to a constant factor. This in particular answers a question of Guo, Li, Shangguan, Tamo, and Wootters about list-decodability of Reed-Solomon codes with radius close to 1. Joint work with Asaf Ferber and Matthew Kwan.

04/01/2022
Kevin Tian, Stanford University
Semi-Random Sparse Recovery in Nearly-Linear Time

Sparse recovery is one of the most fundamental and well-studied inverse problems. Standard statistical formulations of the problem are provably solved by general convex programming techniques and more practical, fast (nearly-linear time) iterative methods. However, these latter "fast algorithms" have previously been observed to be brittle in various real-world settings.

We investigate the brittleness of fast sparse recovery algorithms to generative model changes through the lens of studying their robustness to a "helpful" semi-random adversary, a framework which tests whether an algorithm overfits to input assumptions. We consider the following basic model: let A be an n×d measurement matrix which contains an unknown m×d subset of rows G which are bounded and satisfy the restricted isometry property (RIP), but is otherwise arbitrary. Letting x* in R^d be s-sparse, and given either exact measurements b = Ax* or noisy measurements b = Ax* + ξ, we design algorithms recovering x* information-theoretically optimally in nearly-linear time. We extend our algorithm to hold for weaker generative models, relaxing our planted RIP row subset assumption to a natural weighted variant, and show that our method's guarantees degrade gracefully with the quality of the measurement matrix, allowing it, in some parameter regimes, to run in sublinear time.

Our approach differs from that of prior fast iterative methods with provable guarantees under semi-random generative models (Cheng and Ge '18, Li et al. '20), which typically separate the problem of learning the planted instance from the estimation problem, i.e. they attempt to first learn the planted "good" instance (in our case, the matrix G). However, natural conditions on a submatrix which make sparse recovery tractable, such as RIP, are NP-hard to verify and hence first learning a sufficient row reweighting appears challenging. We eschew this approach and design a new iterative method, tailored to the geometry of sparse recovery, which is provably robust to our semi-random model. Our hope is that our approach opens the door to new robust, efficient algorithms for other natural statistical inverse problems.

Based on joint work with Jonathan Kelner, Jerry Li, Allen Liu, and Aaron Sidford

04/08/2022
Debmalya Panigrahi, Duke University
Recent Results in Minimum Cut Algorithms

Minimum cut problems are among the most basic questions in algorithm design. In the last few years, there has been a resurgence in new results in this domain, resulting in the first improvements in many decades for problems such as global min-cut, vertex min-cut, all-pairs min-cut, etc. In this talk, I will explore some of these results, focusing on the broad themes and techniques that have driven this progress. 

This talk will be based on papers that have appeared in FOCS '20, STOC '21, and FOCS '21.

04/15/2022
William Hoza, Simons Institute
Fooling Constant-Depth Threshold Circuits

We present the first non-trivial pseudorandom generator (PRG) for linear threshold function (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth d and n^{1 + delta} wires, where delta = exp(-O(d)), using seed length O(n^{1 - delta}) and with error exp(-n^{delta}). This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds.

A key ingredient in our construction is a pseudorandom restriction procedure that has tiny failure probability, but simplifies the function to a non-natural "hybrid computational model" that combines decision trees and LTFs. As part of our proof we also construct an "extremely low-error" PRG for the class of functions computable by an arbitrary function of s linear threshold functions that can handle even the extreme setting of parameters s = n/polylog(n) and epsilon = exp(-n/polylog(n)).

Joint work with Pooya Hatami, Avishay Tal, and Roei Tell.

04/22/2022
Sushant Sachdeva, University of Toronto
Almost Linear Time Algorithms for Max-Flow and More
***There will be an additional 15-minute optional portion of the talk immediately afterwards in order to fully convey the main ideas of the work.***

We give the first almost-linear time algorithm for computing exact maximum flows and minimum-cost flows on directed graphs. By well-known reductions, this implies almost-linear time algorithms for several problems including bipartite matching, optimal transport, and undirected vertex connectivity.

Our algorithm is designed using a new Interior Point Method (IPM) that builds the flow as a sequence of almost-linear number of approximate undirected minimum-ratio cycles, each of which is computed and processed very efficiently using a new dynamic data structure.

Our framework extends to give an almost-linear time algorithm for computing flows that minimize general edge-separable convex functions to high accuracy. This gives the first almost-linear time algorithm for several problems including entropy-regularized optimal transport, matrix scaling, p-norm flows, and Isotonic regression.

Joint work with Li Chen, Rasmus Kyng, Yang Liu, Richard Peng, and Maximilian Probst Gutenberg.

04/29/2022
John Kallaugher, Sandia National Laboratories
A Quantum Advantage for a Natural Streaming Problem
**HYBRID EVENT in GDC 4.304 - Event will begin at 2:00PM.**

Data streaming, in which a large dataset is received as a "stream" of updates, is an important model in the study of space-bounded computation. Starting with the work of Le Gall [SPAA '06], it has been known that quantum streaming algorithms can use asymptotically less space than their classical counterparts for certain problems. However, so far, all known examples of quantum advantages in streaming are for problems that are either specially constructed for that purpose, or require many streaming passes over the input.

We give a one-pass quantum streaming algorithm for one of the best studied problems in classical graph streaming - the triangle counting problem. Almost-tight parametrized upper and lower bounds are known for this problem in the classical setting; our algorithm uses polynomially less space in certain regions of the parameter space, resolving a question posed by Jain and Nayak in 2014 on achieving quantum advantages for natural streaming problems.

05/06/2022
Dean Doron, Ben-Gurion University
Approximating Large Powers of Stochastic Matrices in Small Space

We give a deterministic space-efficient algorithm for approximating powers of stochastic matrices. On input a w×w stochastic matrix A, our algorithm approximates A^n in space O(log n + √(log n) · log w) to within high accuracy. This improves upon the seminal work by Saks and Zhou, which requires O(log^{3/2} n + √(log n) · log w) space, in the regime n ≫ w.
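
For contrast with the space-bounded setting of the talk, here is a minimal time-efficient sketch: repeated squaring computes A^n with O(log n) matrix multiplications, but it stores full intermediate matrices and so uses far more than logarithmic space. The lazy two-state chain below is a hypothetical example whose powers converge to the stationary distribution (2/3, 1/3).

```python
def mat_mul(A, B):
    """Dense matrix product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """Compute A^n with O(log n) multiplications by repeated squaring
    (time-efficient, but not space-efficient: full matrices are stored)."""
    dim = len(A)
    result = [[float(i == j) for j in range(dim)] for i in range(dim)]
    while n > 0:
        if n & 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        n >>= 1
    return result

# A lazy two-state stochastic matrix; A^100 is close to the rank-one
# matrix whose rows are the stationary distribution (2/3, 1/3).
A = [[0.9, 0.1], [0.2, 0.8]]
P = mat_pow(A, 100)
print(round(P[0][0], 4), round(P[0][1], 4))  # close to 0.6667 0.3333
```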