### Spring 2017 Schedule

For Spring 2017, the Theory Seminar will meet on Fridays from 2:00-3:00pm in GDC 4.304 unless otherwise noted. This schedule is being updated throughout the semester.

02/03/17 - Bill Fefferman, University of Maryland: "A Complete Characterization of Unitary Quantum Space"
02/10/17 - Eric Price, University of Texas at Austin: "Fourier Sparsity, Polynomials, and Fast Interpolation"
02/17/17 - Aleksander Mądry, MIT: "Continuous Optimization: The 'Right' Language for Graph Algorithms?"
02/23/17 - Jing Chen, Stony Brook University: "From Bayesian to Crowdsourced Bayesian Auctions" *Thursday, February 23 at 10:00am in GDC 6.302*
03/03/17 - Elad Hazan, Princeton University: "A Non-generative Framework and Convex Relaxations for Unsupervised Learning"
03/10/17 - Julia Chuzhoy, Toyota Technological Institute at Chicago, University of Chicago: "New Hardness Results for Routing on Disjoint Paths"
03/17/17 - (Canceled)
03/24/17 - (Canceled)
03/31/17 - Adam Klivans, University of Texas at Austin: "Reliably Learning the ReLU in polynomial time"
04/07/17 - Seth Pettie, University of Michigan, Ann Arbor: "Optimal Compression of Graph Metrics"
04/14/17 - Etienne Vouga, University of Texas at Austin: "All's Well That Ends Well: Guaranteed Resolution of Simultaneous Rigid Body Impact"
04/21/17 - (Canceled)
04/28/17 - Yin Tat Lee, University of Washington: "Eldan's Stochastic Localization and the KLS Hyperplane Conjecture: An Improved Lower Bound for Expansion"
05/05/17 - Adam Bouland, MIT: "On The Power of Statistical Zero Knowledge"
05/12/17 - (Finals Week - No seminar)
05/19/17 - Shelby Kimmel, University of Maryland: "Path Detection: A Quantum Computing Primitive"

### Fall 2016 Schedule

During Fall 2016, the Theory Seminar met on Fridays from 2:00-3:00pm in GDC 4.304.

08/26/16 - Scott Aaronson, University of Texas at Austin: "The Complexity and Computability Theory of Time Travel"
09/02/16 - Ridwan Syed, University of Texas at Austin: "Proving Weak Approximability Without Algorithms" & John Kallaugher, University of Texas at Austin: "Improved Graph Sampling for Triangle Counting"
09/09/16 - (Canceled)
09/16/16 - Dana Moshkovitz, University of Texas at Austin: "Amplification and Derandomization Without Slowdown"
09/23/16 - Moritz Hardt, Google: "Gradient Descent Learns Linear Dynamical Systems"
09/30/16 - Evdokia Nikolova, University of Texas at Austin: "Algorithms for Risk Mitigation in Networks"
10/07/16 - David Soloveichik, University of Texas at Austin: "Agents and Reagents: Distributed Computing in a Test Tube"
10/14/16 - Brent Waters, University of Texas at Austin: "How to Use Indistinguishability Obfuscation: Deniable Encryption, and More"
10/21/16 - Richard Peng, Georgia Tech: "Almost-Linear Time Algorithms for Directed Spectral Sparsification, Computing Stationary Distributions, Simulating Directed Random Walks, and more"
10/24/16 - Josh Grochow, Santa Fe Institute: "Wildness at the heart of complexity" *Monday, October 24 at 11:00am in GDC 6.302*
10/26/16 - Boaz Barak, Harvard University: "Computational Bayesianism, sums of squares, and unicorns" *Wednesday, October 26 at 4:00pm in GDC 6.202*
11/04/16 - Ronald de Wolf, University of Amsterdam: "On linear and semidefinite programs for polytopes in combinatorial optimization"
11/11/16 - Steve Fenner, University of South Carolina: "Bipartite Perfect Matching is in quasi-NC"
11/18/16 - Yuval Ishai, Israel Institute of Technology: "Succinct Secure Computation Without Fully Homomorphic Encryption"
11/25/16 - (Canceled)
11/30/16 - Gil Cohen, California Institute of Technology: "Polynomials, cubes of primes, and lattices" *Wednesday, November 30 at 11:00am in GDC 4.304*
12/02/16 - Thomas Vidick, California Institute of Technology: "Overlapping Qubits"

### August 26, 2016 - Scott Aaronson, University of Texas at Austin: "The Complexity and Computability Theory of Time Travel"

Abstract: In a seminal 1991 paper, David Deutsch proposed a formal model of closed timelike curves (CTCs), or time travel into the past, which used linear algebra to "resolve the grandfather paradox." In 2008, John Watrous and I showed that, under Deutsch's model, both a classical computer and a quantum computer with access to a polynomial-size CTC could solve exactly the problems in PSPACE. A different model of CTCs, based on postselection, was shown by Lloyd et al. to yield the complexity class PP in the quantum case and BPP_path in the classical case.

In this talk, I'll review these results and then give a brand-new extension to the setting of computability theory. Namely, I'll show that a classical or quantum computer with access to a Deutschian CTC (with no bound on its size) could solve exactly the problems that are Turing-reducible to the halting problem. Also, a classical or quantum computer with access to a postselected CTC could solve exactly the problems that are weakly truth-table reducible to the halting problem. Just like in the complexity setting, the most technically interesting part is the upper bound on the power of quantum computers with access to a Deutschian CTC.
Joint work with Mohammad Bavarian and Giulio Gueltrini.

### September 2, 2016 - Ridwan Syed, University of Texas at Austin: "Proving Weak Approximability Without Algorithms"

Abstract: A Boolean predicate f is said to be strongly approximation resistant if, given a near-satisfiable instance of MAX k-CSP(f), it is hard to find an assignment such that the fraction of constraints satisfied deviates significantly from the performance of a random assignment. A predicate which is not strongly approximation resistant is known as weakly approximable.

We give a new method for proving the weak approximability of predicates, using a simple SDP relaxation, without designing and analyzing new rounding algorithms for each predicate. Instead, we use the recent characterization of strong approximation resistance by Khot et al., and show how to prove that for a given predicate f, certain necessary conditions for strong resistance, derived from their characterization, are violated. By their result, this implies the existence of a good rounding algorithm, proving weak approximability.

We show how this method can be used to obtain simple proofs of (weak approximability analogues of) various known results on approximability, as well as new results on weak approximability of symmetric predicates.

### September 2, 2016 - John Kallaugher, University of Texas at Austin: "Improved Graph Sampling for Triangle Counting"

Abstract: We present a new algorithm for the triangle counting problem. Estimating the number of triangles in a stream of edges is a core problem in graph sketching, which has been extensively studied over the last few years. For all graphs, our algorithm matches (up to log factors) or improves on the performance of existing algorithms in the turnstile model, and it is optimal up to log factors for certain natural classes of graphs.

Additionally, we present a lower bound covering all graphs, matching the performance of our algorithm up to log factors. This lower bound applies to "triangle-dependent" sampling algorithms, a class of sampling algorithms that includes our algorithm and all previous sampling algorithms for this problem.
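To make the sampling idea concrete, here is a minimal sketch (not the algorithm from the talk) of the classic edge-sampling estimator: keep each edge independently with probability p, count triangles in the sample, and rescale by 1/p^3, since a triangle survives exactly when all three of its edges do. All function names here are illustrative.

```python
import itertools
import random

def count_triangles(edges):
    """Exact triangle count by checking all vertex triples."""
    adj = {frozenset(e) for e in edges}
    nodes = sorted({v for e in edges for v in e})
    return sum(
        1
        for a, b, c in itertools.combinations(nodes, 3)
        if frozenset((a, b)) in adj
        and frozenset((b, c)) in adj
        and frozenset((a, c)) in adj
    )

def sampled_estimate(edges, p, rng):
    """Keep each edge independently with probability p; a triangle
    survives with probability p^3, so rescale the sampled count."""
    sample = [e for e in edges if rng.random() < p]
    return count_triangles(sample) / p**3
```

The estimator is unbiased but can have high variance on skewed graphs, which is exactly where more careful sampling schemes like the one in the talk pay off.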

### September 16, 2016 - Dana Moshkovitz, University of Texas at Austin: "Amplification and Derandomization Without Slowdown"

Abstract: We present techniques for decreasing the error probability of randomized algorithms and for converting randomized algorithms to deterministic (non-uniform) algorithms.

Unlike most existing techniques that involve repetition of the randomized algorithm and hence a slowdown, our techniques produce algorithms with a similar run-time to the original randomized algorithms. The amplification technique is related to a certain stochastic multi-armed bandit problem.
The derandomization technique - which is the main contribution of this work - points to an intriguing connection between derandomization and sketching/sparsification.

We demonstrate the techniques by showing applications to Max-Cut on dense graphs, approximate clique on graphs that contain a large clique, constraint satisfaction problems on dense bipartite graphs and the list decoding to unique decoding problem for the Reed-Muller code.

### September 23, 2016 - Moritz Hardt, Google: "Gradient Descent Learns Linear Dynamical Systems"

Abstract: We prove that gradient descent efficiently converges to the global optimizer of the maximum likelihood objective of an unknown linear time-invariant dynamical system from a sequence of noisy observations generated by the system. Even though the objective function is non-convex, we provide polynomial running time and sample complexity bounds under strong but natural assumptions. Linear systems identification has been studied for many decades, yet, to the best of our knowledge, these are the first polynomial guarantees for the problem we consider.

Joint work with Tengyu Ma and Ben Recht.
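As a toy illustration of the phenomenon (not the paper's algorithm or setting), one can fit the simplest linear dynamical system, a scalar recurrence x_{t+1} = a*x_t + noise, by running plain gradient descent on the one-step prediction error. All names and parameter choices below are hypothetical.

```python
import random

def simulate(a, T, noise, rng):
    """One trajectory of the scalar system x_{t+1} = a*x_t + w_t."""
    xs = [1.0]
    for _ in range(T):
        xs.append(a * xs[-1] + rng.gauss(0, noise))
    return xs

def fit(xs, lr=0.5, steps=3000):
    """Gradient descent on the mean squared one-step prediction error."""
    a_hat = 0.0
    n = len(xs) - 1
    for _ in range(steps):
        # gradient of (1/n) * sum_t (a_hat*x_t - x_{t+1})^2
        g = sum((a_hat * xs[t] - xs[t + 1]) * xs[t] for t in range(n)) * 2 / n
        a_hat -= lr * g
    return a_hat
```

In this scalar case the objective is convex; the interest of the paper is that convergence guarantees survive in the genuinely non-convex multi-dimensional setting.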

### October 7, 2016 - David Soloveichik, University of Texas at Austin: "Agents and Reagents: Distributed Computing in a Test Tube"

Abstract: The engineering of chemical systems capable of computation, in environments inherently incompatible with electronics, will herald exciting breakthroughs: from programmable living cells, to "smart drugs" that make treatment decisions within the body. To understand how chemical reactions can process information we must develop formal models of chemical computing. For a century and a half, chemical reaction networks in various forms have been used to model chemical interactions (i.e., sets of chemical reactions such as A + B -> A + C). Recent advances in molecular engineering have gone beyond naturally occurring reactions, yielding programmable molecules that obey prescribed reaction rules. Understanding and designing computation in such chemical reaction networks is a quintessential distributed computing problem: large numbers of computationally weak molecules interact asynchronously. After a brief tutorial on chemical reaction networks, I will discuss our recent results showing new time lower bounds on some basic computational tasks (such as leader election and division by 2).
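The distributed-computing flavor of such networks can be sketched with a tiny simulation of leader election in a well-mixed population: agents interact in random pairs, and the single reaction L + L -> L + F knocks out one leader whenever two meet. This is an illustrative toy model, not a construction from the talk; the names and parameters are the author's own.

```python
import random

def leader_election(n, rng):
    """Population of n agents, all starting as leaders 'L'.  When two
    leaders interact, one becomes a follower: L + L -> L + F.  Returns
    the number of pairwise interactions until a single leader remains."""
    agents = ['L'] * n
    leaders, steps = n, 0
    while leaders > 1:
        i, j = rng.sample(range(n), 2)  # a random interacting pair
        steps += 1
        if agents[i] == 'L' and agents[j] == 'L':
            agents[j] = 'F'
            leaders -= 1
    return steps
```

Reducing from n leaders to one always takes exactly n-1 successful reactions; the interesting question, and the subject of the lower bounds in the talk, is how many total interactions are needed.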

### October 14, 2016 - Brent Waters, University of Texas at Austin: "How to Use Indistinguishability Obfuscation: Deniable Encryption, and More"

Abstract: We introduce a new technique, that we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of Deniable Encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: In deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide fake randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.

In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public key encryption, short “hash-and-sign” selectively secure signatures, chosen-ciphertext secure public key encryption, non-interactive zero knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a central hub for cryptography.

### October 21, 2016 - Richard Peng, Georgia Tech: "Almost-Linear Time Algorithms for Directed Spectral Sparsification, Computing Stationary Distributions, Simulating Directed Random Walks, and more"

Abstract: We provide almost-linear time algorithms for computing various fundamental quantities associated with random walks on directed graphs, including the stationary distribution, personalized PageRank vectors, hitting times between vertices, and escape probabilities. Previous algorithms for these problems either invoke a general purpose linear system solver or depend polynomially on a natural condition number associated with the problem (i.e. the mixing time or restart probability).

Our results are based on efficiently reducing all of these problems to solving polylogarithmically many directed Laplacian systems of Eulerian graphs, which are directed graphs in which every vertex's indegree equals its outdegree. We then extend the notion of spectral approximation to these graphs, and solve linear systems in them via sparsified squaring. We hope these results and our analyses open the door for further study into directed spectral graph theory.

This talk is based on https://arxiv.org/abs/1608.03270 and work in progress, joint with Michael B. Cohen, Jon Kelner, John Peebles, Anup B. Rao, Aaron Sidford, and Adrian Vladu.
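For intuition on the simplest of these quantities, here is a naive baseline (far from the almost-linear time methods of the talk) that computes a stationary distribution by power iteration on a row-stochastic transition matrix. The function name and parameters are illustrative.

```python
def stationary_distribution(P, iters=2000):
    """Power iteration for the stationary distribution of a
    row-stochastic matrix P (assumed irreducible and aperiodic):
    repeatedly apply pi <- pi P until the vector stops changing."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

The number of iterations this needs scales with the mixing time, which is exactly the condition-number dependence the talk's algorithms avoid.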

### *Special time and location* Monday, October 24, 2016 at 11:00am in GDC 6.302 - Josh Grochow, Santa Fe Institute: "Wildness at the heart of complexity"

Abstract: In Boolean complexity, the connections between hard problems (NP- or EXP-hard), derandomization, and seemingly-easy problems (e.g., all-pairs shortest paths) are well-known. In algebraic complexity, there is a close analogue of just one of these connections: between hardness (permanent vs determinant or formula size lower bounds) and derandomization (polynomial identity testing). But in the algebraic world we can go further: there is a single, clean mathematical difficulty at the heart of these problems, as well as at the heart of three other problems: group isomorphism (a close cousin to graph isomorphism), matrix multiplication, and a cryptosystem due (independently) to Shulman and Patarin. This central mathematical difficulty is a classical phenomenon known as wildness. In this talk, I will introduce wildness, survey how it plays a role in the preceding major open problems, mention a few places this viewpoint has led to new results, and speculate on future directions, including a possible connection between wildness and quantum speed-ups. Although wildness originated in representation theory and geometry, this talk will assume no mathematical background beyond linear algebra and basic group theory.

### *Special time and location* Wednesday, October 26, 2016 at 4:00pm in GDC 6.202 - Boaz Barak, Harvard University: "Computational Bayesianism, sums of squares, and unicorns"

Abstract: Can we make sense of quantities such as "the probability that 2^81712357 - 1 is prime" or "the probability that statement X is a logical contradiction”? More generally, can we use probabilities to quantify our "computational uncertainty" in cases where all the relevant information is given but in a computationally hard-to-extract form?

In this talk we will discuss how such "pseudo probabilities" can arise from the Sum of Squares (SOS) semidefinite program (Parrilo'00, Lasserre'01). We will show how this yields an approach for showing both positive and negative results for the SOS algorithms. In particular we will present better algorithms for the tensor decomposition problem from data analysis, and stronger lower bounds for the planted clique problem.

The talk will be partially based on joint works with Sam Hopkins, Jon Kelner, Pravesh Kothari, Ankur Moitra, Aaron Potechin and David Steurer. I will not assume any prior knowledge on the sum of squares algorithm or semidefinite programming.

### November 4, 2016 - Ronald de Wolf, University of Amsterdam: "On linear and semidefinite programs for polytopes in combinatorial optimization"

Abstract: Combinatorial problems like TSP optimize a linear function over some polytope P. If we can obtain P as a projection from a larger-dimensional polytope with a small number of facets, then we get a small linear program for the optimization problem; if we obtain P as a projection from a small spectrahedron, then we get a small semidefinite program. The area of extension complexity studies the minimum sizes of such LPs and SDPs. In the 1980s Yannakakis was the first to do this, proving exponential lower bounds on the size of symmetric LPs for the TSP and matching polytopes. In 2012, Fiorini et al. proved exponential lower bounds on the size of all (possibly non-symmetric) LPs for TSP. This was followed by many new results for LPs and SDPs, for exact optimization as well as for approximation. We will survey this recent line of work.

### November 11, 2016 - Steve Fenner, University of South Carolina: "Bipartite Perfect Matching is in quasi-NC"

Abstract: We show that the bipartite perfect matching problem is in quasi-NC. In particular, it has uniform circuits of quasi-polynomial size and O(log^2 n) depth. Previously, only an exponential upper bound was known on the size of such circuits with poly-logarithmic depth. We obtain our result by an almost complete derandomization of the Isolation Lemma of Mulmuley, Vazirani, & Vazirani, which was used to yield an efficient randomized parallel algorithm for the bipartite perfect matching problem.

This is joint work with Rohit Gurjar and Thomas Thierauf.
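The Isolation Lemma at the center of this result can be demonstrated directly on a small example: assign each edge of a bipartite graph an independent uniform weight in {1, ..., 2|E|}, and with probability at least 1/2 the minimum-weight perfect matching is unique. The brute-force check below is a sketch for intuition only; the names are illustrative.

```python
import itertools
import random

def perfect_matchings(n, edges):
    """All perfect matchings of a bipartite graph on [n] + [n],
    enumerated brute-force via permutations (fine for tiny n)."""
    es = set(edges)
    for perm in itertools.permutations(range(n)):
        m = [(i, perm[i]) for i in range(n)]
        if all(e in es for e in m):
            yield m

def min_is_isolated(n, edges, rng):
    """Draw random edge weights in {1, ..., 2|E|}; report whether the
    minimum-weight perfect matching is unique."""
    w = {e: rng.randint(1, 2 * len(edges)) for e in edges}
    totals = sorted(sum(w[e] for e in m) for m in perfect_matchings(n, edges))
    return len(totals) <= 1 or totals[0] < totals[1]
```

The quasi-NC result comes from derandomizing this weight assignment almost completely, replacing the random weights by a quasi-polynomial-size family of deterministic ones.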

### November 18, 2016 - Yuval Ishai, Israel Institute of Technology: "Succinct Secure Computation Without Fully Homomorphic Encryption"

Abstract: Fully homomorphic encryption (FHE) is a powerful cryptographic tool that can be used to minimize the communication complexity of secure computation protocols. However, known FHE schemes rely on a relatively narrow set of assumptions and algebraic structures that are all related to lattices.

We present a new technique for succinct secure computation that replaces FHE by ''homomorphic secret sharing'' and can be based on discrete-log-type assumptions. More concretely, under the Decisional Diffie-Hellman (DDH) assumption, we construct a 2-out-of-2 secret sharing scheme that supports a compact evaluation of branching programs on the shares. We use this to obtain several new DDH-based results, including succinct protocols for secure two-party computation and round-efficient protocols for secure multiparty computation.

Joint work with Elette Boyle and Niv Gilboa
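For readers unfamiliar with the starting point, here is the much simpler classical ingredient: plain 2-out-of-2 additive secret sharing, which lets two parties locally evaluate any public linear function on their shares. This toy sketch (with a modulus and names chosen by the editor) only supports linear functions; the point of the DDH-based construction in the talk is to push local evaluation all the way to branching programs.

```python
import random

P = 2**61 - 1   # a prime modulus, chosen arbitrarily for this demo

def share(x, rng):
    """2-out-of-2 additive sharing: x = s0 + s1 (mod P); each share
    alone is uniformly random and reveals nothing about x."""
    s0 = rng.randrange(P)
    return s0, (x - s0) % P

def eval_linear(coeffs, shares):
    """Each party applies the same public linear function to its own
    shares; the two local outputs are again shares of the answer."""
    out0 = sum(c * s0 for c, (s0, _) in zip(coeffs, shares)) % P
    out1 = sum(c * s1 for c, (_, s1) in zip(coeffs, shares)) % P
    return out0, out1

def reconstruct(s0, s1):
    return (s0 + s1) % P
```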

### *Special time and location* Wednesday, November 30, 2016 at 11:00am in GDC 4.304 - Gil Cohen, California Institute of Technology: "Polynomials, cubes of primes, and lattices"

Abstract: Let f be a non-constant polynomial that maps {0,...,n} to {0,...,n-1}. How low can the degree of f be? In this talk, we give some bounds by studying the behavior of f on a cube of primes, and by relating the problem to the shortest vector problem in lattices.

Joint work with Amir Shpilka and Avishay Tal.
No prior knowledge is assumed.

### December 2, 2016 - Thomas Vidick, California Institute of Technology: "Overlapping Qubits"

Abstract: An ideal system of n qubits has 2^n dimensions. This exponential scaling grants quantum computers their power, but also hinders characterizing a system's state and dynamics. We study a new problem: the qubits in a physical system might not be independent. They can "overlap," in the sense that an operation on one qubit might slightly affect the others. We show that, allowing for slight overlaps, n qubits can fit in just polynomially many dimensions. (With overlap defined in a natural way, all pairwise overlaps can be at most epsilon in n^{O(1/epsilon^2)} dimensions.) Our results show that real systems with only apparently small imperfections could have much less power than one would have expected. On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise, so it remains important to devise more robust tests of the arrangement of qubits in quantum devices.

Joint work with Rui Chao, Chris Sutherland, and Ben Reichardt.
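A classical analogue of the packing phenomenon (in the spirit of the Johnson-Lindenstrauss lemma, not the talk's construction) is easy to see numerically: many random unit vectors in a modest number of dimensions are already nearly pairwise orthogonal, i.e. have small "overlap." The snippet below is the editor's illustration, with all names hypothetical.

```python
import math
import random

def random_unit_vector(d, rng):
    """A uniformly random point on the unit sphere in R^d."""
    v = [rng.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def max_pairwise_overlap(vectors):
    """Largest |<u, v>| over all distinct pairs."""
    return max(
        abs(sum(a * b for a, b in zip(u, v)))
        for i, u in enumerate(vectors)
        for v in vectors[i + 1:]
    )
```

Typical overlaps between random unit vectors in d dimensions scale like 1/sqrt(d), so 50 vectors comfortably fit in 400 dimensions with all overlaps well below a small constant.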

### February 3, 2017 - Bill Fefferman, University of Maryland: "A Complete Characterization of Unitary Quantum Space"

Abstract: Motivated by trying to understand the power of quantum computation with a small number of qubits, we give two natural complete problems that characterize the power of quantum space-bounded classes. The first is based on the Matrix Inversion problem for well-conditioned matrices. We show that given the size-n efficient encoding of a 2^{O(k(n))} ×2^{O(k(n))} well-conditioned matrix H, approximating a particular entry of H^{−1} is complete for the class of problems solvable by a quantum algorithm that uses O(k(n)) qubits of space and performs all quantum measurements at the end of the computation. In particular, the problem of computing entries of H^{−1} for an explicit well-conditioned n × n matrix H is complete for unitary quantum logspace.

We then show that the problem of approximating to high precision the least eigenvalue of a positive semidefinite matrix H, encoded as a circuit, gives a second characterization of unitary quantum space complexity. In the process we also establish an equivalence between unitary quantum space-bounded classes and certain Quantum Merlin Arthur proof systems. As consequences, we establish that QMA with exponentially small completeness-soundness gap is equal to PSPACE, that determining whether a local Hamiltonian is frustration-free is PSPACE-complete, and give a provable setting in which the ability to prepare PEPS states gives less computational power than the ability to prepare the ground state of a generic local Hamiltonian.

This is joint work with Cedric Lin (QuICS, University of Maryland)
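The classical core of the Matrix Inversion problem is simply that one entry of H^{-1} can be read off from a single linear solve: column j of H^{-1} is the solution of H x = e_j. The sketch below shows this with textbook Gauss-Jordan elimination; it is a classical illustration only, not the quantum space-bounded algorithm from the talk, and the function names are the editor's.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[r][n] / M[r][r] for r in range(n)]

def inverse_entry(H, i, j):
    """Entry (i, j) of H^{-1}: solve H x = e_j and read off x[i]."""
    e = [1.0 if k == j else 0.0 for k in range(len(H))]
    return solve(H, e)[i]
```

The well-conditionedness assumption in the talk is what makes this entry approximable to good precision with very little (quantum) space.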

### February 10, 2017 - Eric Price, University of Texas at Austin: "Fourier Sparsity, Polynomials, and Fast Interpolation"

Abstract: In recent years, a number of works have studied methods for computing the Fourier transform in sublinear time if the output is sparse. Most of these have focused on the discrete setting, even though in many applications the input signal is continuous and naive discretization significantly worsens the sparsity level.

In the continuous setting, if the signal is sampled over a duration T, it is impossible to robustly identify frequencies that lie too close to each other (within 1/T). We show that one can achieve this bound to within a log factor while maintaining constant approximation factor and good sample complexity. We furthermore show that the barrier does not apply to estimating the signal as a whole: we give a sample-efficient algorithm to robustly estimate Fourier-sparse signals, regardless of the frequency gap.

As a special case, we get an algorithm to interpolate degree d polynomials from noisy measurements, using O(d) samples and O~(d) time, while increasing the error by a constant factor in L2.

Joint work with Xue Chen, Daniel Kane, and Zhao Song. Based on https://arxiv.org/abs/1609.00896 and https://arxiv.org/abs/1609.01361 .
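For the polynomial special case, the noiseless baseline is classical: a degree-d polynomial is determined by d+1 samples, recoverable by Lagrange interpolation. The sketch below shows only this easy noiseless case (with illustrative names); exact interpolation is notoriously unstable under noise, which is what makes the robust O(d)-sample guarantee of the talk nontrivial.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate at x the unique degree-(len(xs)-1) polynomial
    passing through the points (xs[i], ys[i])."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```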

### February 17, 2017 - Aleksander Mądry, MIT: "Continuous Optimization: The 'Right' Language for Graph Algorithms?"

Abstract: Traditionally, we view graphs as purely combinatorial objects and tend to design our graph algorithms to be combinatorial as well. In fact, in the context of algorithms, "combinatorial" became a synonym for "fast".

Recent work, however, shows that a number of such "inherently combinatorial" graph problems can be solved much faster using methods that are very "non-combinatorial": specifically, by approaching these problems with tools and notions borrowed from linear algebra and, more broadly, from continuous optimization. Notable examples here are the recent lines of work on the maximum flow problem, the bipartite matching problem, and the shortest path problem in graphs with negative-length arcs.

This raises an intriguing question: Is continuous optimization a more suitable and principled lens for fast graph algorithms than the classic combinatorial view?

In this talk, I will discuss this question as well as the developments that motivated it.

### *Special time and location* Thursday, February 23, 2017 at 10:00am in GDC 6.302 - Jing Chen, Stony Brook University: "From Bayesian to Crowdsourced Bayesian Auctions"

Abstract: In Bayesian mechanism design, a strong assumption is that the distributions of the players' private types are common knowledge to the designer and the players (the common prior assumption). An important problem that has received a lot of attention in both economics and computer science is to weaken this assumption. In this work we consider, for the first time, multi-item auctions where the knowledge about the players' value distributions is arbitrarily scattered among the players and the seller. Each one of them privately knows some of the value distributions or a refinement of them, no constraint is imposed on who knows which distributions, and the seller does not know who knows what. In such an unstructured information setting, we design mechanisms for unit-demand auctions and additive auctions, whose expected revenue approximates that of the optimal Bayesian mechanisms by "crowdsourcing" the players' and the seller's knowledge. In some sense, our results show that the common prior assumption is without much loss of generality in Bayesian auctions if one is willing to give up a fraction of the revenue, and this fraction shrinks gracefully as the amount of knowledge in the system increases.

### March 3, 2017 - Elad Hazan, Princeton University: "A Non-generative Framework and Convex Relaxations for Unsupervised Learning"

Abstract: We'll describe a novel theoretical framework for unsupervised learning which is not based on generative assumptions. It is comparative, and allows one to avoid known computational hardness results and improper algorithms based on convex relaxations. We show how several families of unsupervised learning models, which were previously only analyzed under probabilistic assumptions and are otherwise provably intractable, can be efficiently learned in our framework by convex optimization. These include dictionary learning and learning of algebraic manifolds.

Joint work with Tengyu Ma

### March 10, 2017 - Julia Chuzhoy, Toyota Technological Institute at Chicago, University of Chicago: "New Hardness Results for Routing on Disjoint Paths"

Abstract: In the classical Node-Disjoint Paths (NDP) problem, the input consists of an undirected $n$-vertex graph $G$, and a collection $M=\{(s_1,t_1),\ldots,(s_k,t_k)\}$ of pairs of its vertices, called source-destination, or demand, pairs. The goal is to route the largest possible number of the demand pairs via node-disjoint paths. The best current approximation for the problem is achieved by a simple greedy algorithm, whose approximation factor is $O(\sqrt n)$, while the best current negative result is a roughly $\Omega(\log^{1/2}n)$-hardness of approximation. Even seemingly simple special cases of the problem are still poorly understood: when the input graph is a grid, the best current algorithm achieves a $\tilde{O}(n^{1/4})$-approximation, and when it is a general planar graph, the best current approximation ratio of an efficient algorithm is $\tilde{O}(n^{9/19})$. The best currently known lower bound for both these versions of the problem is APX-hardness.

In this talk we will show that NDP is $2^{\Omega(\sqrt{\log n})}$-hard to approximate, unless all problems in NP have algorithms with running time $n^{O(\log n)}$. Our result holds even when the underlying graph is a planar graph with maximum vertex degree 3, and all source vertices lie on the boundary of a single face. We extend this result to the closely related Edge-Disjoint Paths problem, showing the same hardness of approximation ratio even for sub-cubic planar graphs with all sources lying on the boundary of a single face.

This is joint work with David H.K. Kim and Rachit Nimavat.

### March 31, 2017 - Adam Klivans, University of Texas at Austin: "Reliably Learning the ReLU in polynomial time"

Abstract: We give the first dimension-efficient algorithms for learning Rectified Linear Units (ReLUs), which are functions of the form max(0, w.x) with w a unit vector (2-norm equal to 1). Our algorithm works in the challenging Reliable Agnostic learning model of Kalai, Kanade, and Mansour where the learner is given access to a distribution D on labeled examples, but the labeling may be arbitrary. We construct a hypothesis that simultaneously minimizes the false-positive rate and the l_p loss of inputs given positive labels by D.

It runs in polynomial time (in n) with respect to {\em any} distribution on S^{n-1} (the unit sphere in n dimensions) and for any error parameter \epsilon = \Omega(1/\log n). These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where \epsilon must be \Omega(1) and strong assumptions are required on the marginal distribution. We can compose our results to obtain the first set of efficient algorithms for learning constant-depth networks of ReLUs.

Surprisingly, the following question remains open: does there exist a fully polynomial-time algorithm (polynomial in the dimension and the accuracy parameter) for learning even a single ReLU?

Joint work with Surbhi Goel, Varun Kanade, and Justin Thaler
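To fix ideas, here is a toy fit of a single ReLU y = max(0, w.x) on clean, realizable data by plain gradient descent. This is an editor's illustration only: the talk's algorithm works in the much harder reliable agnostic model, where no such gradient-descent guarantee is claimed, and all names and hyperparameters below are hypothetical.

```python
import random

def fit_relu(xs, ys, d, lr=0.1, steps=1000):
    """Gradient descent on the squared loss for y ~ max(0, w . x)."""
    w = [0.1] * d                     # small nonzero initialization
    n = len(xs)
    for _ in range(steps):
        grad = [0.0] * d
        for x, y in zip(xs, ys):
            z = sum(wk * xk for wk, xk in zip(w, x))
            if z > 0:                 # ReLU gradient is 0 when inactive
                for k in range(d):
                    grad[k] += 2 * (z - y) * x[k] / n
        for k in range(d):
            w[k] -= lr * grad[k]
    return w
```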

### April 7, 2017 - Seth Pettie, University of Michigan, Ann Arbor: "Optimal Compression of Graph Metrics"

Abstract: Every undirected, unweighted graph G=(V,E) defines a graph metric (V,d) where d is the distance function w.r.t. G. Graph spanners, emulators, and distance oracles can all be viewed as "lossy" compression schemes that take a (dense) undirected graph G and produce a representation of an approximation (V,d') to the true metric (V,d).

What does an information-theoretically optimal representation of (V,d') look like? In particular, given a budget of n^{1+eps} bits, how much must d' differ from d, as a function of eps?

In this talk I will survey the history of graph compression. The talk will cover constructions of multiplicative spanners, additive spanners, sublinear additive emulators, and a recent lower bound due to Abboud, Bodwin, and Pettie (SODA 2017) that mostly closes the problem.
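The oldest of these constructions, the greedy multiplicative spanner, is short enough to state in full: scan the edges, and keep an edge only if the spanner built so far does not already connect its endpoints within distance t. Every discarded edge is then stretched by at most t, which bounds the stretch of every shortest path. The sketch below (names are illustrative) implements this for unweighted graphs.

```python
from collections import deque
from itertools import combinations

def dist(adj, s, t):
    """BFS distance from s to t (inf if disconnected)."""
    seen = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return float('inf')

def greedy_spanner(nodes, edges, t):
    """Keep edge (u, v) only if the current spanner has dist(u, v) > t."""
    adj = {v: set() for v in nodes}
    kept = []
    for u, v in edges:
        if dist(adj, u, v) > t:
            adj[u].add(v)
            adj[v].add(u)
            kept.append((u, v))
    return kept, adj
```

The information-theoretic question in the talk asks how far such schemes are from the best possible trade-off between the bit budget and the distortion d' - d.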

### April 14, 2017 - Etienne Vouga, University of Texas at Austin: "All's Well That Ends Well: Guaranteed Resolution of Simultaneous Rigid Body Impact"

Abstract: When multiple objects are touching, resolving collisions between some of them often creates new collisions between others. For example, in a perfect head-on pool break, the cue ball first collides (only) with the ball at the tip of the pyramid, but a resolution of this collision in isolation induces new collisions against the two balls on the next level of the pyramid, and so forth. For this reason, the usual approach for resolving impacts involving multiple objects is to adopt an iterative strategy: a rule is chosen for how to modify the velocities of objects to fix some subset of the collisions. Applying the rule fixes some collisions while perhaps causing others; the rule is thus applied again, repeatedly, until all collisions are resolved.

Unfortunately, these algorithms have lacked formal guarantees of termination, and have been viewed as potentially dangerous, so failsafes are used in practical codes to prevent infinite loops. I will show such steps are unnecessary: for a broad class of rules that satisfy a minimal set of physical correctness properties, including conservation of energy, termination is guaranteed after finitely many iterations, except for a well-characterized set of failure cases that can be detected and corrected. The talk will not assume any detailed knowledge of physics, beyond a conceptual understanding of high school Newtonian mechanics.
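The iterative strategy is easy to see in a one-dimensional toy version (the editor's simplification, far from the talk's general setting): equal-mass balls in a line, where the elastic resolution rule for an approaching pair simply exchanges the two velocities, applied repeatedly until no pair is approaching.

```python
def resolve_impacts(v):
    """Velocities of equal-mass balls lined up left to right, all in
    contact.  Pair (i, i+1) is colliding while v[i] > v[i+1]; the
    elastic rule for equal masses swaps the two velocities.  Apply
    the rule repeatedly until no pair is colliding."""
    v = list(v)
    changed = True
    while changed:
        changed = False
        for i in range(len(v) - 1):
            if v[i] > v[i + 1]:
                v[i], v[i + 1] = v[i + 1], v[i]
                changed = True
    return v
```

In this toy case termination is immediate (the procedure is bubble sort on the velocities); the talk's contribution is proving termination for a broad class of physically correct rules in the genuinely hard multi-body setting.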

### April 28, 2017 - Yin Tat Lee, University of Washington: "Eldan's Stochastic Localization and the KLS Hyperplane Conjecture: An Improved Lower Bound for Expansion"

Abstract: The KLS conjecture asserts that the isoperimetric constant (expansion) of any isotropic convex set is bounded below by a universal constant. Put differently, the conjecture asserts that any convex set is an "expander" after some linear transformation. This conjecture has many applications in both computer science and mathematics.

In this talk, I will explain the importance of this conjecture and its applications to computer science, and give a proof that the KLS constant for n-dimensional isotropic logconcave measures is O(n^{1/4}).

Joint work with Santosh Vempala.

### May 5, 2017 - Adam Bouland, MIT: "On The Power of Statistical Zero Knowledge"

Abstract: We examine the power of statistical zero knowledge proofs (captured by the complexity class SZK) and their variants. First, we give the strongest known relativized evidence that SZK contains hard problems, by exhibiting an oracle relative to which SZK (indeed, even NISZK) is not contained in the class UPP, containing those problems solvable by randomized algorithms with unbounded error. This answers an open question of Watrous from 2002. Second, we "lift" this oracle separation to the setting of communication complexity, thereby answering a question of Goos et al. (ICALP 2016). Third, we give relativized evidence that perfect zero knowledge proofs (captured by the class PZK) are weaker than general zero knowledge proofs. Specifically, we exhibit oracles relative to which SZK != PZK, NISZK != NIPZK, and PZK != coPZK. The first of these results answers a question raised in 1991 by Aiello and Hastad (Information and Computation), and the second answers a question of Lovett and Zhang (2016). We also describe additional applications of these results outside of structural complexity.

The technical core of our results is a stronger hardness amplification theorem for approximate degree, which roughly says that composing the gapped-majority function with any function of high approximate degree yields a function with high threshold degree.

Based on joint work with Lijie Chen, Dhiraj Holden, Justin Thaler, and Prashant Nalini Vasudevan

### May 19, 2017 - Shelby Kimmel, University of Maryland: "Path Detection: A Quantum Computing Primitive"

Abstract: "st-connectivity" is the problem of deciding whether two points in a graph are connected or not (i.e. whether there is a path between them). I will show that any Boolean formula evaluation problem can be transformed into an st-connectivity problem, so good algorithms for st-connectivity potentially give good algorithms for formula evaluation. I will discuss a quantum algorithm for st-connectivity that is relatively straightforward to analyze, and that is also optimal for evaluating many Boolean formulas.
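The classical series-parallel reduction behind this idea can be sketched directly for monotone formulas: an AND gate composes sub-gadgets in series, an OR gate in parallel, and each leaf becomes an edge that is present exactly when its variable is true; the formula evaluates to true iff s and t are connected. This sketch (with names chosen by the editor) shows only the reduction, not the quantum algorithm from the talk.

```python
from collections import deque
from itertools import count

def formula_to_graph(f):
    """Series-parallel gadget for a monotone Boolean formula.
    f is ('var', i), ('and', f1, f2) or ('or', f1, f2).
    Returns (s, t, edges); each edge is labeled by the variable
    that must be True for it to be present."""
    fresh = count()
    s, t = next(fresh), next(fresh)
    def build(f, s, t):
        if f[0] == 'var':
            return [(s, t, f[1])]
        _, f1, f2 = f
        if f[0] == 'and':                       # series composition
            mid = next(fresh)
            return build(f1, s, mid) + build(f2, mid, t)
        return build(f1, s, t) + build(f2, s, t)  # 'or': parallel
    return s, t, build(f, s, t)

def connected(s, t, edges, assignment):
    """BFS st-connectivity over the edges whose variable is True."""
    adj = {}
    for u, v, var in edges:
        if assignment[var]:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, q = {s}, deque([s])
    while q:
        for v in adj.get(q.popleft(), ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return t in seen
```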