Note: The conference will not be held on October 18-19, 2025, as originally planned, since Austin will host the US Formula 1 Grand Prix that weekend.
Keynote Talks:
· Russell Impagliazzo (UCSD) Using High-Dimensional Expanders for Hardness Amplification
· Salil Vadhan (Harvard) Applications of Multicalibration in Complexity and Cryptography
· Avi Wigderson (IAS) The Value of Errors in Proofs - the fascinating journey from Turing's 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE (remote)
· Raghu Meka (UCLA) Structure vs Randomness 2.0: Recent Progress in Additive Combinatorics and Communication Complexity
Program:
Saturday
8:30-9:30 Breakfast (breakfast tacos, yogurt and toppings, pastries, coffee and tea...)
9:30-10:30 Salil Vadhan Applications of Multicalibration in Complexity and Cryptography
10:30-11 Break
11-12 Russell Impagliazzo Using High-Dimensional Expanders for Hardness Amplification
12-2 Lunch (on your own)
2-3 Raghu Meka Spreadness and Mixing: A New Structure vs. Randomness Refinement (remote)
3-3:30 Coffee break
3:30-4 Jesse Goodman Randomness extractors
4-4:30 Madhur Tulsiani List Decoding Expander-Based Codes up to Capacity in Near-Linear Time
4:30-5:30 A break and mini-talks (Chair: William Hoza)
6 Reception (at Dana and Scott’s home, one mile from the computer science building)
Sunday
8:30-9:30 Breakfast (similar to Saturday’s breakfast)
9:30-10 Leonard Schulman The Rate-Immediacy Barrier in Explicit Tree Code Constructions
10-10:30 Venkat Guruswami Redundancy is all you need (for CSP sparsification)
10:30-11 Roei Tell Targeted Samplers
11-11:30 Break
11:30-12:30 Avi Wigderson The Value of Errors in Proofs - the fascinating journey from Turing's 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE (remote)
Organizing Committee:
Eshan Chattopadhyay (Cornell), Dean Doron (Ben-Gurion University), Jesse Goodman (UT Austin), William Hoza (Chicago), Xin Li (Johns Hopkins), Raghu Meka (UCLA), Dana Moshkovitz (UT Austin), Anup Rao (UW).
More (Non-Local) Confirmed Participants:
Lijie Chen (Berkeley), Xue Chen (UST China), Kuan Cheng (Peking), Mahdi Cheraghchi (Michigan), Parikshit Gopalan (Apple), Zeyu Guo (Ohio), Venkat Guruswami (Berkeley), Chin Ho Lee (NCSU), Ted Pyne (MIT), Hanlin Ren (IAS), Leonard Schulman (Caltech), Roei Tell (Toronto), Avishay Tal (Berkeley), Madhur Tulsiani (TTIC), Chris Umans (Caltech).
Registration:
The workshop is free and open to everyone! For planning purposes, if you intend to come, please fill out your name and email address here.
Local Information:
The workshop will be in the computer science building of UT Austin, the Gates Dell Complex (GDC). The closest parking garage to the workshop is the San Jacinto Parking Garage (not free, open to the public). If you Uber, use the Peter O’Donnell Building (POB) as the destination. POB will be locked during the weekend, but it is adjacent to GDC.
Lunch. There are many lunch options within walking distance, including: Cava, Taco Joint (closed on Sundays), Kerbey Lane Café (diner), Ramen Tatsu-ya, Potbelly (sandwiches), Sweetgreen (salads), 1914 Noodles and other Asian restaurants adjacent to it, and Madam Mam (Thai). Farther away (a short Uber or a longer walk) are Black’s BBQ, Kabobzi (Mediterranean), Sip Pho (Vietnamese), Tacodeli (Tex Mex), Clay Pit (modern Indian), Texas Chili Parlor, and many others.
General information about visiting UT Austin (airport, hotels, recommendations) is available here (Dana’s selection).
Abstracts:
Title: Applications of Multicalibration in Complexity and Cryptography
Speaker: Salil Vadhan, Harvard University
Abstract: In this talk, I will describe how Multicalibration, a new concept arising from the algorithmic fairness literature, is a powerful tool for reasoning about average-case complexity and computational indistinguishability, with applications to security proofs in cryptography. Specifically, the Multicalibration Theorem of [HébertJohnson-Kim-Reingold-Rothblum `18] asserts that every Boolean function g, no matter how complex, is "indistinguishable" from a "simple" randomized function: there is a "low-complexity" partition of the domain of g into a small number of pieces such that on almost every piece P_i, if we choose an input X uniformly at random from P_i, then (X, g(X)) is computationally indistinguishable from (X, Bernoulli(p_i)), where p_i is the expectation of g on P_i. As shown by [Dwork-Lee-Lin-Tankala `23], this is a complexity-theoretic analogue of Szemerédi's Regularity Lemma in graph theory, which partitions the vertex set of every graph G into a small number of pieces P_i, such that on almost all pairs P_i x P_j, the graph G is, in a certain sense, indistinguishable from a random bipartite graph with edge density matching that of G on P_i x P_j.
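To make the indistinguishability statement concrete, here is a toy numerical sketch in Python. It illustrates the statement only, not the theorem or its proof: the hash-based g stands in for an arbitrary "complex" function, the partition by top bits is a hypothetical "low-complexity" partition, and the check compares only the simplest statistic, the conditional mean of the second coordinate.

```python
import random

random.seed(0)
n = 10
domain = range(2 ** n)

# A stand-in "complex" Boolean function (a multiplicative hash, mod 2):
def g(x):
    return bin((x * 2654435761) % 2 ** n).count("1") % 2

# A hypothetical "low-complexity" partition: bucket inputs by their top 2 bits.
def piece(x):
    return x >> (n - 2)

pieces = {i: [x for x in domain if piece(x) == i] for i in range(4)}
p = {i: sum(map(g, xs)) / len(xs) for i, xs in pieces.items()}  # p_i = E[g on P_i]

# On each piece, compare (X, g(X)) against the surrogate (X, Bernoulli(p_i)):
# the second coordinates should exhibit nearly identical statistics.
for i, xs in pieces.items():
    real = sum(g(random.choice(xs)) for _ in range(20000)) / 20000
    surrogate = sum(random.random() < p[i] for _ in range(20000)) / 20000
    print(f"piece {i}: p_i={p[i]:.3f}  real={real:.3f}  surrogate={surrogate:.3f}")
```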
The Multicalibration Theorem allows us to reduce many questions about computational hardness and computational indistinguishability to their information-theoretic analogues. Thus, it can be viewed as a qualitative strengthening of several complexity-theoretic results that were already known to have many applications to security proofs in cryptography, such as Impagliazzo's Hardcore Lemma [Impagliazzo `95, Holenstein `06], the Complexity-Theoretic Dense Model Theorem [Reingold-Trevisan-Tulsiani-Vadhan `08], and the Weak Complexity-Theoretic Regularity/Leakage Simulation Lemma of [Trevisan-Tulsiani-Vadhan `09, Jetchev-Pietrzak `14]. In particular, we show that these latter results all follow easily as corollaries of the Multicalibration Theorem. Furthermore, we use it to obtain new results characterizing how many samples are required to efficiently distinguish two distributions X and Y in terms of their "pseudo-Hellinger distance" (or the "pseudo-Rényi-1/2 entropy" of X in case Y is uniform).
Joint works with Sílvia Casacuberta and Cynthia Dwork, with Cassandra Marcussen and Louie Putterman, and with Lunjia Hu.
--
Title: Spreadness and Mixing: A New Structure vs. Randomness Refinement
Speaker: Raghu Meka, University of California, Los Angeles
Abstract: The classical structure vs. randomness paradigm decomposes an object into a structured component and a random-looking remainder. This paradigm has fueled breakthroughs from Roth and Szemerédi to modern regularity methods. Recent works introduced a refined viewpoint based on "spreadness" and "mixing".
This perspective has led to exponential improvements for several problems, including: (i) upper bounds for 3-term arithmetic-progression-free sets, (ii) explicit separations of deterministic and randomized communication in the three-player Number-On-Forehead model, and (iii) a spread regularity lemma with algorithmic consequences for triangle detection and Boolean matrix multiplication. I will survey these developments.
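As a toy companion to (i), the Python sketch below (my own illustration, brute force only) computes the size of the largest 3-AP-free subset of {0, ..., n-1} for tiny n; the improvements surveyed in the talk concern how this quantity scales for large n, not the search itself.

```python
from itertools import combinations

def has_3ap(s):
    """Check whether the integer set s contains a 3-term arithmetic progression."""
    ss = set(s)
    # a < c with an integer midpoint in the set form the 3-AP (a, (a+c)/2, c)
    return any((a + c) % 2 == 0 and (a + c) // 2 in ss
               for a, c in combinations(sorted(ss), 2))

def largest_3ap_free(n):
    """Brute-force the maximum size of a 3-AP-free subset of {0, ..., n-1}."""
    for size in range(n, 0, -1):
        if any(not has_3ap(c) for c in combinations(range(n), size)):
            return size
    return 0

for n in range(1, 11):
    print(n, largest_3ap_free(n))
```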
Based on joint works with Amir Abboud, Nick Fischer, Zander Kelley, and Shachar Lovett.
--
Title: Randomness extractors
Speaker: Jesse Goodman, The University of Texas at Austin
Abstract: Given a sequence of N independent sources of
In this talk, I will discuss our new work that achieves K = 3.
Our key ingredient is a near-optimal explicit construction of a powerful new pseudorandom primitive, called a “leakage-resilient extractor (LRE) against number-on-forehead (NOF) protocols.” Our LRE can be viewed as a significantly more robust version of classical independent source extractors, and resolves an open question by Kumar, Meka, and Sahai (FOCS’19) and Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman (FOCS’20). Our LRE construction is based on a simple new connection we discover between multiparty communication complexity and "non-malleable" extractors, which shows that such extractors exhibit strong average-case lower bounds against NOF protocols.
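For contrast with the new primitive, here is a minimal Python sketch of a classical independent-source extractor, the two-source inner-product extractor of Chor and Goldreich (the flat sources and parameters are my illustrative choices). Its output bit is nearly unbiased whenever the two sources jointly carry enough min-entropy; an LRE must stay unbiased even given NOF-style leakage, which this classical construction alone is not promised to survive.

```python
import random

random.seed(1)
n = 16

def ip_extract(x, y):
    """One-bit inner-product extractor: <x, y> over GF(2)."""
    return bin(x & y).count("1") % 2

# Two independent flat sources on n bits, each uniform over a random support
# of size 2^12 (min-entropy 12 each, comfortably above n/2):
X = random.sample(range(2 ** n), 2 ** 12)
Y = random.sample(range(2 ** n), 2 ** 12)

out = [ip_extract(random.choice(X), random.choice(Y)) for _ in range(200000)]
print("empirical bias:", abs(sum(out) / len(out) - 0.5))  # should be small
```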
Based on joint work with Eshan Chattopadhyay.
--
Title: List Decoding Expander-Based Codes up to Capacity in Near-Linear Time
Speaker: Madhur Tulsiani, TTIC
Abstract: We will talk about a new framework, based on graph regularity lemmas, for list decoding and list recovery of codes based on spectral expanders. These constructions often proceed by showing that the distance of local constant-sized codes can be lifted to a distance lower bound for the global code. The regularity framework allows us to similarly lift list size bounds for local base codes to list size bounds for the global codes.
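As a deliberately tiny instance of the objects involved, the following Python sketch builds a toy Tanner code and brute-forces its parameters; the graph (K5 rather than an expander) and the even-weight local code are my illustrative choices, far weaker than the constructions in the talk.

```python
from itertools import combinations, product

# Code bits live on the 10 edges of K5 (a 4-regular graph); each vertex
# requires the 4 bits on its incident edges to lie in the local code C0
# (here: even-weight words of length 4).
vertices = range(5)
edges = list(combinations(vertices, 2))

def local_ok(bits):
    return sum(bits) % 2 == 0        # C0 = length-4 even-weight code

def is_codeword(w):
    return all(local_ok([b for e, b in zip(edges, w) if v in e])
               for v in vertices)

codewords = [w for w in product((0, 1), repeat=len(edges)) if is_codeword(w)]
print("dimension:", len(codewords).bit_length() - 1)          # 6 = |E| - |V| + 1
print("distance:", min(sum(w) for w in codewords if any(w)))  # 3, a triangle
```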
This allows us to obtain novel combinatorial bounds for list decoding and list recovery up to optimal radius, for the Tanner codes of Sipser and Spielman, and for the distance amplification scheme of Alon, Edmonds, and Luby. Further, using existing algorithms for computing weak-regularity decompositions of sparse graphs in (randomized) near-linear time, these tasks can be performed in near-linear time.
--
Title: Redundancy is all you need (for CSP sparsification)
Speaker: Venkat Guruswami, UC Berkeley
Abstract: David Zuckerman has taught us to love (min-)entropy, and how to extract it wherever it may be hidden within a weak random source. In honor of David and entropy, I will describe a recent result with Josh Brakensiek showing that every constraint satisfaction problem (CSP) can be sparsified down to (within polylog factors of) the size of a maximum non-redundant instance of that CSP. A non-redundant instance of a CSP admits, for each constraint, an assignment satisfying only that constraint, making such instances obvious barriers for sparsification (e.g., for cut sparsification in graphs, spanning trees are non-redundant instances).
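A brute-force rendering of this definition in Python (the encoding of constraints as predicates is my own illustration) makes the spanning-tree example concrete:

```python
from itertools import product

def is_non_redundant(n_vars, constraints):
    """Every constraint must have a witness assignment that satisfies it
    while violating all of the other constraints."""
    assignments = list(product((0, 1), repeat=n_vars))
    return all(
        any(c(a) and not any(d(a) for j, d in enumerate(constraints) if j != i)
            for a in assignments)
        for i, c in enumerate(constraints))

# Cut constraints x_u != x_v on 3 vertices: a spanning tree is non-redundant,
# while the full triangle is not (no cut of a triangle has exactly one edge).
tree = [lambda a: a[0] != a[1], lambda a: a[1] != a[2]]
triangle = tree + [lambda a: a[0] != a[2]]
print(is_non_redundant(3, tree))       # True
print(is_non_redundant(3, triangle))   # False
```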
Our sparsification result is established in the very general context of set families, giving a VC-like theorem for multiplicative error. It unifies and generalizes prior sparsification results for graph cuts, linear codes, and various CSP classes. A key ingredient in the proof is an application of the beautiful entropic inequality underlying Gilmer's recent breakthrough on the union-closed sets conjecture.
As a consequence of our main theorem, which precisely ties the sparsifiability of a CSP to its non-redundancy (NRD), a number of results in the NRD literature immediately extend to CSP sparsification. We also contribute new results towards understanding the rich landscape of NRD. For example, in recent work with Brakensiek, Jansen, Lagerkvist, and Wahlström, we demonstrate, for every rational r ≥ 1, an example CSP with NRD growing as Θ(n^r). We also demonstrate a CSP that has O(n) NRD for reasons beyond abelian group embeddings.
--
Title: Targeted Samplers
Speaker: Roei Tell, University of Toronto
Abstract: How can we reduce the error of a probabilistic algorithm? A standard approach uses samplers (equivalently, extractors), which sample any event approximately correctly with high probability. The observation underlying this talk is that samplers are overkill for error reduction, since we only want to approximately sample *one* event (specified by an algorithm and an input), and this event's description is explicitly given to us.
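For context, here is a minimal Python sketch of the baseline that samplers derandomize: rerun the algorithm with fresh randomness and take a majority vote (the noisy toy algorithm and the repetition count are hypothetical). A sampler replaces the t independent random strings with correlated ones generated from a single short seed; a targeted sampler, per the talk, can further exploit knowing exactly which event is being estimated.

```python
import random

def amplify(algorithm, x, t=101):
    """Majority vote over t independent runs of a two-sided-error algorithm."""
    votes = sum(algorithm(x, random.random()) for _ in range(t))
    return votes * 2 > t

# A hypothetical algorithm answering "is x == 0?" correctly only w.p. 2/3:
def noisy_is_zero(x, r):
    return (x == 0) if r > 1 / 3 else (x != 0)

random.seed(0)
trials = [amplify(noisy_is_zero, 0) for _ in range(1000)]
print("correct after amplification:", sum(trials), "/ 1000")  # typically 1000
```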
I'll present a notion of Targeted Samplers, which are suitable for this purpose, and describe some of the potential and limitations of this notion. This seems especially appropriate for celebrating David's birthday, since the notion belongs to two of David's lines of work: constructing samplers and extractors, and the quest for highly efficient derandomization. The talk is based on a work in preparation with Heda Chen, Dean Doron, and Mohammad Hossein Ebtehaj.
--
Title: The Value of Errors in Proofs - the fascinating journey from Turing's 1936 R ≠ RE to the 2020 breakthrough of MIP* = RE
Speaker: Avi Wigderson, Institute for Advanced Study, Princeton
Abstract: In 2020, a group of theoretical computer scientists posted a paper on the arXiv with the strange-looking title "MIP* = RE", impacting and surprising not only computational complexity theory but also some areas of math and physics. Specifically, it resolved, in the negative, the "Connes' embedding conjecture" in the area of von Neumann algebras, and the "Tsirelson problem" in quantum information theory. You can find the paper here: https://arxiv.org/abs/2001.04383.
As it happens, both acronyms, MIP* and RE, represent proof systems of very different natures. To explain them, we'll take a meandering journey through the classical and modern definitions of proof. I hope to explain how the methodology of computational complexity theory, especially modeling and classification (of both problems and proofs) by algorithmic efficiency, naturally leads to the generation of new such notions and results (and more acronyms, like NP). A special focus will be on notions of proof that allow interaction, randomness, and errors, and on their surprising power and magical properties.
The talk will be non-technical and requires no special background.