Note: The conference will not be held on October 18-19, 2025, as originally planned, since Austin will host the US Formula 1 Grand Prix at that time.
Keynote Talks:
· Russell Impagliazzo (UCSD) Using High-Dimensional Expanders for Hardness Amplification
· Salil Vadhan (Harvard) Singular-Value Sparsification of Graphs and BPL vs. L
· Avi Wigderson (IAS) The Value of Errors in Proofs - the fascinating journey from Turing's 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE (remote)
· Raghu Meka (UCLA) Structure vs Randomness 2.0: Recent Progress in Additive Combinatorics and Communication Complexity
Tentative Program:
Saturday:
8:30-9:30 Breakfast (breakfast tacos, yogurt and toppings, pastries, coffee and tea...)
9:30-10:30 Salil Vadhan: Singular-Value Sparsification of Graphs and BPL vs. L
10:30-11 Break
11-12 Russell Impagliazzo: Using High-Dimensional Expanders for Hardness Amplification
12-2 Lunch (on your own)
2-3 Raghu Meka: Spreadness and Mixing: A New Structure vs. Randomness Refinement
3-3:30 Coffee break
3:30-4 Jesse Goodman: Randomness extractors
4-4:30 Madhur Tulsiani
4:30-5:30 A break and mini-talks (Chair: William Hoza)
6 Reception (at Dana and Scott’s home, one mile from the computer science building)

Sunday:
8:30-9:30 Breakfast (similar to Saturday’s breakfast)
9:30-10 Leonard Schulman: The Rate-Immediacy Barrier in Explicit Tree Code Constructions
10-10:30 Venkat Guruswami: Redundancy is all you need (for CSP sparsification)
10:30-11 Roei Tell: Targeted Samplers
11-11:30 Break
11:30-12:30 Avi Wigderson: The Value of Errors in Proofs - the fascinating journey from Turing's 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE (remote)
Organizing Committee:
Eshan Chattopadhyay (Cornell), Dean Doron (Ben-Gurion University), Jesse Goodman (UT Austin), William Hoza (Chicago), Xin Li (Johns Hopkins), Raghu Meka (UCLA), Dana Moshkovitz (UT Austin), Anup Rao (UW).
More (Non-Local) Confirmed Participants:
Lijie Chen (Berkeley), Xue Chen (UST China), Kuan Cheng (Peking), Mahdi Cheraghchi (Michigan), Yevgeniy Dodis (NYU), Parikshit Gopalan (Apple), Zeyu Guo (Ohio), Venkat Guruswami (Berkeley), Chin Ho Lee (NCSU), Ted Pyne (MIT), Hanlin Ren (IAS), Leonard Schulman (Caltech), Roei Tell (Toronto), Avishay Tal (Berkeley), Madhur Tulsiani (TTIC), Chris Umans (Caltech).
Registration:
The workshop is free and open to everyone! For planning purposes, if you intend to come, please fill out your name and email address here.
Local Information:
General information about visiting UT Austin is available here. The closest parking garage to the workshop is the San Jacinto Parking Garage (not free, open to the public). If you Uber, use the Peter O’Donnell Building as the destination. This building is seamlessly connected to the Gates Dell Complex, the computer science building where the workshop takes place.
There are many lunch options nearby, including: Cava, Taco Joint (closed on Sundays), Kerbey Lane Café, Ramen Tatsu-ya, Potbelly (sandwiches), Sweetgreen (salads), and 1914 Noodles and the other Asian restaurants adjacent to it.
Abstracts:
Title: Singular-Value Sparsification of Graphs and BPL vs. L
Speaker: Salil Vadhan, Harvard University
Abstract: In an effort to make progress on derandomizing space-bounded computation, we and others formulated a new, stronger notion of spectral approximation for both undirected and directed graphs, called singular-value approximation. In this talk, I will describe this notion, its many nice properties, and its applications in space-bounded derandomization, as well as some open problems and potential future directions.
Joint work with AmirMahdi Ahmadinejad, John Peebles, Ted Pyne, and Aaron Sidford.
--
Title: Spreadness and Mixing: A New Structure vs. Randomness Refinement
Speaker: Raghu Meka, The University of California, Los Angeles
Abstract: The classical structure vs. randomness paradigm decomposes an object into a structured component and a random-looking remainder. This paradigm has fueled breakthroughs from Roth and Szemerédi to modern regularity methods. Recent works introduced a refined viewpoint based on "spreadness" and "mixing". This perspective has led to exponential improvements for several problems, including: (i) upper bounds for 3-term arithmetic-progression-free sets, (ii) explicit separations of deterministic and randomized communication in the three-player Number-On-Forehead model, and (iii) a spread regularity lemma with algorithmic consequences for triangle detection and Boolean matrix multiplication. I will survey these developments.
Based on joint works with Amir Abboud, Nick Fischer, Zander Kelley, and Shachar Lovett.
--
Title: Randomness extractors
Speaker: Jesse Goodman, The University of Texas at Austin
Abstract: Given a sequence of N independent sources, of which only K are guaranteed to be good (i.e., to contain some min-entropy), how small can K be while still allowing explicit extraction? In this talk, I will discuss our new work that achieves K = 3.
Our key ingredient is a near-optimal explicit construction of a powerful new pseudorandom primitive, called a “leakage-resilient extractor (LRE) against number-on-forehead (NOF) protocols.” Our LRE can be viewed as a significantly more robust version of classical independent source extractors, and resolves an open question by Kumar, Meka, and Sahai (FOCS’19) and Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman (FOCS’20). Our LRE construction is based on a simple new connection we discover between multiparty communication complexity and "non-malleable" extractors, which shows that such extractors exhibit strong average-case lower bounds against NOF protocols.
Based on joint work with Eshan Chattopadhyay.
--
Title: Redundancy is all you need (for CSP sparsification)
Speaker: Venkat Guruswami, UC Berkeley
Abstract: David Zuckerman has taught us to love (min-)entropy, and how to extract it wherever it may be hidden within a weak random source. In honor of David and entropy, I will describe a recent result with Josh Brakensiek showing that every constraint satisfaction problem (CSP) can be sparsified down to (within polylog factors of) the size of a maximum non-redundant instance of that CSP. A non-redundant instance of a CSP admits, for each constraint, an assignment satisfying only that constraint, making such instances obvious barriers for sparsification (e.g., for cut sparsification in graphs, spanning trees are non-redundant instances).

Our sparsification result is established in the very general context of set families, giving a VC-like theorem for multiplicative error. It unifies and generalizes prior sparsification results for graph cuts, linear codes, and various CSP classes. A key ingredient in the proof is an application of the beautiful entropic inequality underlying Gilmer's recent breakthrough on the union-closed sets conjecture.

As a consequence of our main theorem, which precisely ties the sparsifiability of a CSP to its non-redundancy (NRD), a number of results in the NRD literature immediately extend to CSP sparsification. We also contribute new results towards understanding the rich landscape of NRD. For example, in recent work with Brakensiek, Jansen, Lagerkvist, and Wahlstrom, we demonstrate, for every rational r ≥ 1, an example CSP with NRD growing as Θ(n^r). We also demonstrate a CSP that has O(n) NRD for reasons beyond abelian group embeddings.
--
Title: Targeted Samplers
Speaker: Roei Tell, University of Toronto
Abstract: How can we reduce the error of a probabilistic algorithm? A standard approach uses samplers (equivalently, extractors), which sample any event approximately correctly, with high probability. The observation underlying this talk is that samplers are overkill for error reduction, since we only want to approximately sample *one* event (specified by an algorithm and an input), and this event's description is explicitly given to us.

I'll present a notion of Targeted Samplers, which are suitable for this purpose, and describe some of the potential and limitations of this notion. This seems especially appropriate for celebrating David's birthday, since the notion belongs to two of David's lines of work: constructing samplers and extractors, and the quest for highly efficient derandomization. The talk is based on a work in preparation with Heda Chen, Dean Doron, and Mohammad Hossein Ebtehaj.
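
As a point of reference for the standard approach mentioned above, here is a minimal sketch (not taken from the talk): error reduction by independent repetition runs the algorithm on many fresh random seeds and takes a majority vote. The run_once procedure below is a hypothetical stand-in for any two-sided-error randomized algorithm; the point is that every repetition consumes a fresh random seed, which is exactly the cost that samplers (and targeted samplers) are designed to avoid.

    import random
    from collections import Counter

    def amplify(run_once, x, trials=101):
        # Naive error reduction by independent repetition.
        # run_once(x, seed) is assumed to return the algorithm's
        # (possibly wrong) answer on input x using the given random
        # seed, erring with probability at most 1/3. By a Chernoff
        # bound, the majority over `trials` independent runs errs
        # with probability exponentially small in `trials` -- at the
        # cost of `trials` fresh random seeds. A sampler instead
        # generates correlated seeds from far fewer random bits with
        # almost the same guarantee.
        votes = Counter(run_once(x, random.getrandbits(128)) for _ in range(trials))
        return votes.most_common(1)[0][0]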
--
Title: The Value of Errors in Proofs - the fascinating journey from Turing's 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE
Speaker: Avi Wigderson, Institute for Advanced Study, Princeton.
Abstract: In 2020, a group of theoretical computer scientists posted a paper on the arXiv with the strange-looking title "MIP* = RE", impacting and surprising not only computational complexity theory but also some areas of math and physics. Specifically, it resolved, in the negative, the "Connes' embedding conjecture" in the area of von Neumann algebras, and the "Tsirelson problem" in quantum information theory. You can find the paper here: https://arxiv.org/abs/2001.

As it happens, both acronyms MIP* and RE represent proof systems, of a very different nature. To explain them, we'll take a meandering journey through the classical and modern definitions of proof. I hope to explain how the methodology of computational complexity theory, especially modeling and classification (of both problems and proofs) by algorithmic efficiency, naturally leads to the generation of new such notions and results (and more acronyms, like NP). A special focus will be on notions of proof which allow interaction, randomness, and errors, and their surprising power and magical properties.

The talk will be non-technical and requires no special background.