Schedule

For Spring 2018, the Theory Seminar will meet on Fridays from 2:00-3:00pm in GDC 4.302 unless otherwise noted. This schedule is being updated throughout the semester.

1/12/2018 – Keerti Choudhary, Weizmann Institute: “Fault Tolerant Structures for Reachability and Strong-connectivity”
1/19/2018 – Ryan Williams, MIT: "Circuit Lower Bounds for Nondeterministic Quasi-Polytime"
2/02/2018 – Adam Klivans, University of Texas at Austin: "Learning Discrete Markov Random Fields with Nearly Optimal Runtime and Sample Complexity"
2/09/2018 – Chandra Chekuri, University of Illinois: "Speeding up MWU Based Approximation Schemes for Positive LPs and Some Applications"
2/16/2018 – Avishay Tal, Stanford University: "Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity"
3/02/2018 – Alexandros Dimakis, University of Texas at Austin: "Generative Models and Compressed Sensing"
3/23/2018 – Li-Yang Tan, TTI-Chicago: "Fooling Polytopes"
3/30/2018 – Seth Pettie, University of Michigan: "A Time Hierarchy Theorem for the LOCAL Model"
4/13/2018 – Prashant Nalini Vasudevan, MIT: "Fine-Grained Cryptography"
4/20/2018 – Joseph Neeman, University of Texas at Austin: "Learning and Testing Linear Juntas"
4/27/2018 – Toni Pitassi, University of Toronto: "Lifting in Communication Complexity, and Applications"
5/04/2018 – Michael Saks, Rutgers University: "Lower Bounds for Online Labeling"
5/11/2018 – Jason D. Lee, USC: "Geometry of Optimization Landscapes and Implicit Regularization of Optimization Algorithms"

1/12/2018
Keerti Choudhary, Weizmann Institute
Fault Tolerant Structures for Reachability and Strong-Connectivity

In this talk, we look at the problems of single-source reachability (SSR) and strongly connected components (SCC) in the presence of vertex and edge failures. In the static setting, these problems can be solved in O(|V|+|E|) time for any directed graph G=(V, E). To model a faulty network, we associate a parameter k with the network such that at most k vertices (or edges) may fail at any stage. The goal is to preprocess the graph G and compute a compact data structure that, given any set F of at most k vertices (or edges), efficiently solves the SSR and SCC problems for the graph G\F.

We show that, given a directed graph G with n vertices and a positive integer k, we can obtain a fault tolerant data structure that solves the SSR and SCC problems for G\F in O(2^k n polylog n) time for any set F of size at most k. For SSR, we show a much stronger result: for every positive integer k, there is a subgraph (sparse certificate) H of G with at most 2^k n edges such that the vertices reachable from the source in G\F and in H\F are identical for every F of size at most k. Our results make interesting use of simple techniques such as max-flows, farthest min-cuts, and heavy-path decomposition.
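
For context, the static baseline these structures compete with is a fresh O(|V|+|E|) search for each failure set. A minimal Python sketch of that baseline query (the fault-tolerant structure above answers the same query from a preprocessed O(2^k n)-edge certificate instead):

    from collections import deque

    def reachable_after_faults(adj, source, faults):
        # Naive baseline: BFS in G \ F, costing O(|V|+|E|) per query.
        # adj: dict mapping vertex -> list of out-neighbors;
        # faults: set of at most k failed vertices.
        if source in faults:
            return set()
        seen = {source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in faults and v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen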

1/19/2018
Ryan Williams, MIT
Circuit Lower Bounds for Nondeterministic Quasi-Polytime

I will give an overview of the recent proof (with my PhD student Cody Murray) that the class NQP = NTIME(n^{polylog n}) does not have ACC circuits of n^{log n} size (even with a layer of threshold gates at the bottom). This is achieved by strengthening a key component of the proof that NEXP is not in ACC; in particular, we prove a new "Easy Witness Lemma" for NQP.

2/02/2018
Adam Klivans, University of Texas at Austin
Learning Discrete Markov Random Fields with Nearly Optimal Runtime and Sample Complexity

We give an algorithm for learning the structure of an undirected graphical model over any finite alphabet that has nearly optimal runtime and sample complexity.  We make no assumptions on the structure of the graph.  For Ising models, this subsumes and improves on all prior work.  For higher-order MRFs, these are the first results of their kind.

Our approach is new and uses a multiplicative-weight update algorithm. Our algorithm, Sparsitron, is easy to implement (it has only one parameter) and works in the online setting. It also gives the first provably efficient solution to the problem of learning sparse Generalized Linear Models (GLMs).
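
To give a flavor of the one-parameter multiplicative-weights approach, here is a schematic Hedge-style learner in the spirit of Sparsitron; the update rule, loss, and parameter names below are illustrative assumptions, not the paper's exact algorithm:

    import numpy as np

    def sparsitron_like(samples, lam, beta=0.9):
        # Schematic multiplicative-weights (Hedge-style) learner for a
        # sparse GLM; NOT the paper's exact update rule.
        # samples: iterable of (x, y) with x in [-1,1]^n and y in [0,1];
        # lam: assumed bound on the l1-norm of the true weight vector.
        sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
        w = None
        for x, y in samples:
            if w is None:
                w = np.ones(2 * len(x))        # separate weights for +/- signs
            z = np.concatenate([x, -x])        # so coefficients may be negative
            p = sigmoid(lam * np.dot(w / w.sum(), z))
            loss = (1.0 + (p - y) * z) / 2.0   # per-coordinate loss in [0,1]
            w = w * beta ** loss               # the one-parameter Hedge update
        v = lam * w / w.sum()
        return v[:len(v) // 2] - v[len(v) // 2:]  # recombine signed copies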

Joint work with Raghu Meka.

2/09/2018
Chandra Chekuri, University of Illinois
Speeding up MWU Based Approximation Schemes for Positive LPs and Some Applications

One of the recent exciting directions in algorithms is developing faster algorithms for various problems in discrete and combinatorial optimization via sophisticated continuous optimization methods. The multiplicative weight update (MWU) method is an older technique and has been used to approximately solve positive linear programs (packing, covering and mixed packing and covering LPs) starting from the early 90s. MWU based methods have some advantages when dealing with implicit linear programs that arise frequently in combinatorial optimization. We build upon several existing ideas and propose some new ones to obtain significantly faster algorithms for several problems. The talk will illustrate some examples including Metric-TSP, spanning tree packing, and geometric packing and covering problems.
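
For a feel of the basic MWU template these methods build on (long predating, and much slower than, the talk's algorithms), here is a minimal sketch of the core packing step of minimizing the maximum constraint load over the simplex; all names and parameters are illustrative:

    import numpy as np

    def min_max_load(A, T=2000, eta=0.05):
        # MWU sketch: minimize max_i (Ax)_i over the simplex, assuming the
        # entries of A lie in [0, 1]. Weights concentrate on the most-loaded
        # constraints; the oracle best-responds by putting all mass on the
        # currently cheapest column; averaging the plays gives an additive
        # O(sqrt(log m / T)) guarantee.
        m, n = A.shape
        u = np.ones(m) / m
        avg = np.zeros(n)
        for _ in range(T):
            j = int(np.argmin(u @ A))      # best response to weighted constraints
            u = u * np.exp(eta * A[:, j])  # penalize constraints this play loads
            u = u / u.sum()
            avg[j] += 1.0
        return avg / T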

Based on joint work with Kent Quanrud.

2/16/2018
Avishay Tal, Stanford University
Improved Pseudorandomness for Unordered Branching Programs through Local Monotonicity

We present an explicit pseudorandom generator with polylog(n) seed length for read-once constant-width branching programs that can read their $n$ input bits in any order. This improves upon the work of Impagliazzo, Meka, and Zuckerman (FOCS, 2012), where they required seed length $n^{1/2+o(1)}$.

A central ingredient in our work is a bound on the Fourier spectrum of constant-width branching programs, settling a conjecture posed by Reingold, Steinke, and Vadhan (RANDOM, 2013).

Our analysis crucially uses a notion of local monotonicity on the edge labeling of the branching program. We carry out critical parts of our proof under the assumption of local monotonicity and show how to deduce our results for unrestricted branching programs.
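
To fix the model: a width-w read-once branching program reads the n bits in some fixed but arbitrary order, and each layer applies one of two state maps depending on the bit read. A minimal evaluator (names illustrative):

    def eval_branching_program(order, layers, start, accept, x):
        # order[t]  : index of the input bit read at layer t (any permutation
        #             of 0..n-1; "unordered" means the order is adversarial).
        # layers[t] : (f0, f1), the state maps for bit value 0 / 1, each a
        #             list or dict over the w states. The talk's "local
        #             monotonicity" is a condition on these edge labelings.
        state = start
        for t, i in enumerate(order):
            f0, f1 = layers[t]
            state = f1[state] if x[i] else f0[state]
        return state in accept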

Joint work with Eshan Chattopadhyay, Pooya Hatami, and Omer Reingold.

3/02/2018
Alexandros Dimakis, University of Texas at Austin
Generative Models and Compressed Sensing

The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements by making use of prior knowledge. In most of the literature, this structure is sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing without employing sparsity at all. Instead, we assume that the unknown vector lies near the range of a generative model, e.g., a GAN or a VAE. We show how the problems of image inpainting and super-resolution are special cases of our general framework.

We show how to generalize the RIP condition to generative models and prove that random Gaussian measurement matrices satisfy it with high probability. A Lipschitz condition on the generative neural network is the key technical ingredient in our results. Time permitting, we will discuss how this method can be used to protect against adversarial examples.
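
The recovery step amounts to minimizing ||A G(z) - y||^2 over the latent code z. A toy numpy sketch with an untrained two-layer ReLU "generator" and hand-rolled gradients; all sizes, the step size, and the use of plain gradient descent are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    k, h, n, m = 5, 40, 100, 25                 # latent, hidden, signal, measurement dims
    W1 = rng.normal(size=(h, k)) / np.sqrt(k)   # fixed toy generator weights
    W2 = rng.normal(size=(n, h)) / np.sqrt(h)
    G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)  # two-layer ReLU "generator"

    A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian measurements
    x_true = G(rng.normal(size=k))              # a signal in the range of G
    y = A @ x_true                              # noiseless observations

    z = rng.normal(size=k)                      # gradient descent on ||A G(z) - y||^2
    for _ in range(2000):
        pre = W1 @ z
        r = A @ (W2 @ np.maximum(pre, 0.0)) - y
        z -= 0.1 * (W1.T @ ((W2.T @ (A.T @ r)) * (pre > 0)))
    print(np.linalg.norm(G(z) - x_true))        # recovery error (small if GD succeeds)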

(Based on collaborations with Ashish Bora, Ajil Jalal and Eric Price)

3/23/2018
Li-Yang Tan, TTI-Chicago
Fooling Polytopes

We give an explicit pseudorandom generator with seed length poly(log m, 1/delta) · log n that delta-fools intersections of m halfspaces over {0,1}^n (equivalently, m-facet polytopes). Joint work with Ryan O'Donnell (CMU) and Rocco Servedio (Columbia).
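
For concreteness, an m-facet polytope here is just an intersection of m halfspaces, so the test a generator must fool looks like this (illustrative sketch):

    import numpy as np

    def in_polytope(x, W, theta):
        # x in {0,1}^n lies in the intersection of the m halfspaces
        # {z : W[i] @ z <= theta[i]} iff every inequality holds. A PRG
        # delta-fools this class if Pr[in_polytope(G(seed), W, theta)] is
        # within delta of the uniform-input probability for every (W, theta).
        return bool(np.all(W @ x <= theta))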

3/30/2018
Seth Pettie, University of Michigan
A Time Hierarchy Theorem for the LOCAL Model

The celebrated "Time Hierarchy Theorem" for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the classic distributed LOCAL model has been open for many years. In particular, it is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as $O(\log^* n)$, $O(\log n)$, $2^{O(\sqrt{\log n})}$, etc.
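
In the LOCAL model, a t-round algorithm is precisely a function of each node's radius-t neighborhood, so "time" means "how far a node can see." A minimal round-by-round simulator (illustrative, with a user-supplied decision rule):

    def local_simulation(adj, inputs, rounds, decide):
        # Simulate the LOCAL model: in each synchronous round every node
        # sends its entire current knowledge to all neighbors (messages are
        # unbounded), so after t rounds node v knows exactly its radius-t
        # ball. 'decide' maps a node and its accumulated view to its output.
        views = {v: {v: inputs[v]} for v in adj}
        for _ in range(rounds):
            new_views = {}
            for v in adj:
                merged = dict(views[v])
                for u in adj[v]:
                    merged.update(views[u])
                new_views[v] = merged
            views = new_views
        return {v: decide(v, views[v]) for v in adj}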

We establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy. One of the gap results can be interpreted as showing that the distributed Lovász local lemma is complete for randomized sublogarithmic time.

Joint work with Yi-Jun Chang.  Extended abstract appeared in FOCS 2017.  Full version at arXiv:1704.06297.

4/13/2018
Prashant Nalini Vasudevan, MIT
Fine-Grained Cryptography

Fine-grained cryptography is the study of cryptographic objects that are required to be secure only against adversaries that are moderately more powerful than the honest parties. This weakening of the security requirement opens up possibilities for meaningful cryptographic constructions in various settings, using hardness assumptions considerably weaker than those of standard cryptography. In this talk, I will provide a brief overview of the area and of two of our papers:

1. In the first, we construct several unconditionally secure cryptographic primitives that are computable by and secure against constant-depth circuits. Under a reasonable complexity-theoretic assumption, we do the same for log-depth circuits.

2. In the second, we present functions that are hard to compute on average for algorithms running in some fixed polynomial time, assuming widely-conjectured worst-case hardness of certain problems from the study of fine-grained complexity. We also construct a proof-of-work protocol based on this hardness and certain structural properties of our functions. 
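
For comparison, the familiar hash-puzzle interface that such a proof of work replaces looks as follows; this toy is NOT the construction from the talk, which instead bases the work on conjectured fine-grained hardness:

    import hashlib, itertools

    def solve_pow(challenge: bytes, difficulty: int) -> int:
        # Toy hash-based proof of work: find a nonce whose SHA-256 digest has
        # 'difficulty' leading zero bits. The talk's protocol achieves the
        # same solve/verify interface with provable guarantees from
        # fine-grained hardness assumptions rather than a random-oracle hash.
        for nonce in itertools.count():
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
                return nonce

    def verify_pow(challenge: bytes, difficulty: int, nonce: int) -> bool:
        # Verification is much cheaper than solving: one hash evaluation.
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (256 - difficulty) == 0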

Based on joint work with Marshall Ball, Akshay Degwekar, Alon Rosen, Manuel Sabin, and Vinod Vaikuntanathan.

4/20/2018
Joseph Neeman, University of Texas at Austin
Learning and Testing Linear Juntas

Suppose you have a function on a high-dimensional space that is secretly just a function of some low-dimensional subspace. Such a function is called a "linear k-junta," where k is the dimension of the small subspace. I'll present a polynomial-time algorithm that evaluates the function on poly(k) points and then decides whether it is a linear k-junta. If it is, I'll present another algorithm that learns the function with exp(k) queries. The important point is that for both algorithms the query complexity does not depend on the ambient dimension.
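
To fix the definition in code: the function below on n = 100 variables is secretly a function of a k = 2-dimensional subspace, so it is constant along every direction orthogonal to that subspace (all sizes illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 100, 2
    A = rng.normal(size=(k, n))        # rows span the hidden k-dim subspace

    def f(x):
        u, v = A @ x                   # depends on x only through A @ x
        return np.sign(np.sin(u) + v)  # an arbitrary function of the projection

    # f is invariant along directions orthogonal to the rows of A:
    x = rng.normal(size=n)
    d = rng.normal(size=n)
    d -= A.T @ np.linalg.solve(A @ A.T, A @ d)  # project onto the orthogonal complement
    print(f(x) == f(x + 10 * d))                # True: moving along d changes nothing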

4/27/2018
Toni Pitassi, University of Toronto
Lifting in Communication Complexity, and Applications

Hardness escalation is a growing research area whereby lower bounds and separations in communication complexity are obtained by developing simulation or lifting theorems. The basic idea of lifting is to start with an arbitrary function and "lift" it via function composition in order to get a new function whose communication complexity is essentially the same as the query complexity of the original function. In query complexity, the objects of study are decision trees, perhaps the simplest and most basic model of computation. A simulation theorem thus shows that the optimal communication protocol for the composed function is essentially the trivial one, which mimics the decision tree protocol.
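
Schematically, a lifting theorem for a two-party gadget $g$ has the following form (stated loosely; the exact gadget and parameters vary from theorem to theorem):

$\mathsf{CC}(f \circ g^n) = \Theta(\mathsf{DT}(f) \cdot \mathsf{CC}(g))$,

where $f: \{0,1\}^n \to \{0,1\}$ is arbitrary, $(f \circ g^n)(x, y) = f(g(x_1, y_1), \ldots, g(x_n, y_n))$, $\mathsf{DT}$ denotes decision tree (query) complexity, and $\mathsf{CC}$ denotes communication complexity.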

In this talk we will review the basic lifting theorems that have been proven, and discuss their many applications, including game theory, extension complexity, proof complexity, and circuit complexity. We will pay special attention to a recent lifting theorem (joint with Robert Robere) that translates Nullstellensatz degree lower bounds into lower bounds for Gál's rank measure. This lifting theorem implies exponential monotone span program lower bounds and separations over all finite fields.

5/04/2018
Michael Saks, Rutgers University
Lower Bounds for Online Labeling

The online labeling problem (also known as the file maintenance problem) is a natural algorithmic problem that has arisen as a building block for data structures. A stream of distinct integer items is to be assigned labels online from the label set {1,...,m} so that the order of the labels strictly respects the natural order of the items. Maintaining this order condition may require relabeling items. The algorithm pays 1 each time an item is labeled or relabeled, and its goal is to minimize the total cost.
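
A naive strategy makes the cost model concrete: label each new item at the midpoint of its neighbors' labels and, when no integer label is free, relabel everything evenly, paying for each relabel (illustrative sketch):

    import bisect

    def online_label(stream, m):
        # Midpoint insertion with full evenly-spaced relabeling on collision.
        # 'cost' counts every label/relabel -- the quantity the lower bounds
        # in the talk control. Assumes len(stream) is much smaller than m.
        items, labels, cost = [], [], 0
        for x in stream:
            i = bisect.bisect_left(items, x)
            lo = labels[i - 1] if i > 0 else 0
            hi = labels[i] if i < len(labels) else m + 1
            items.insert(i, x)
            if hi - lo > 1:
                labels.insert(i, (lo + hi) // 2)
                cost += 1
            else:                                # collision: rebuild all labels
                step = m // (len(items) + 1)
                labels = [(j + 1) * step for j in range(len(items))]
                cost += len(items)
        return cost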

I'll survey upper and lower bounds and open problems, with an emphasis on recent lower bounds in both the deterministic and randomized settings.

5/11/2018
Jason D. Lee, USC
Geometry of Optimization Landscapes and Implicit Regularization of Optimization Algorithms

We first study the problem of learning a two-layer ReLU network with a positive output layer under Gaussian inputs, and the symmetric matrix completion problem. Despite the non-convexity of both problems, we prove that every local minimizer is a global minimizer. Since gradient descent converges to local minimizers, this shows that simple gradient-based methods can find the global optimum of these non-convex problems.

In the second part, we analyze the implicit regularization effects of various optimization algorithms. In particular, we prove that for least squares with mirror descent, the algorithm converges to the closest solution in terms of the Bregman divergence. For linearly separable classification problems, we prove that steepest descent with respect to a norm solves the SVM with respect to the same norm. For over-parametrized non-convex problems such as matrix sensing or neural nets with quadratic activations, we prove that gradient descent converges to the minimum nuclear norm solution, which allows for both meaningful optimization and generalization guarantees.
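
The l2 special case of the mirror descent claim is easy to check numerically: on an underdetermined least-squares problem, gradient descent started at zero converges to the minimum-norm interpolant (a small sketch; sizes and step count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 100)) / 10.0   # underdetermined: many interpolants
    y = rng.normal(size=20)

    w = np.zeros(100)                       # starting at 0 is essential to the claim
    for _ in range(5000):
        w -= 0.1 * A.T @ (A @ w - y)        # plain gradient descent on ||Aw - y||^2 / 2

    w_min = np.linalg.pinv(A) @ y           # the minimum l2-norm interpolant
    print(np.linalg.norm(w - w_min))        # ~0: GD implicitly picked it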