The Fall 2005 seminar schedule is below.

The seminar meets on **Fridays** at **11 a.m.** in **ACES 3.408**.

FALL 2005 SCHEDULE

Fri, Sep. 23 |
Anup Rao
Extractors for a Constant Number of Polynomial Min-Entropy Independent Sources |

Fri, Sep. 30 |
Bill Gasarch, University of Maryland
Communication Complexity of Hamming Distance |

Fri, Oct. 14 |
Anindya Patthak
Correcting Errors Beyond the GS Radius (Parvaresh-Vardy '05) |

Thu, Oct. 27 |
Leslie Valiant, Harvard University
A Quantitative Theory of Neural Computation (NOTE: TAYLOR 2.106 at 3:45pm) |

For more information on the ACT seminar please contact Adam Klivans (klivans "at" cs.utexas.edu) or Vijaya Ramachandran (vlr "at" cs.utexas.edu).

Anup Rao: "Extractors for a Constant Number of Polynomial Min-Entropy Independent Sources"

We consider the problem of bit extraction from independent sources. We construct an extractor that can extract from a constant number of independent sources of length n, each of which has min-entropy n^gamma for an arbitrarily small constant gamma > 0. Our constructions differ from recent extractor constructions \cite{BarakIW04, BarakKSSW05, Raz05, Bourgain05} for this problem in that they do not rely on any results from additive number theory. They are obtained by composing previous constructions of strong seeded extractors in simple ways.
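As a reminder of the key definition (an illustration, not part of the abstract): a source X has min-entropy k if no single outcome has probability greater than 2^{-k}. A minimal sketch in Python, with made-up distributions:

```python
import math

def min_entropy(probs):
    """Min-entropy H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(probs))

# A uniform distribution on 8 outcomes has min-entropy log2(8) = 3 bits.
uniform = [1 / 8] * 8
print(min_entropy(uniform))  # 3.0

# A skewed source is only as good as its most likely outcome.
skewed = [1 / 2] + [1 / 14] * 7
print(min_entropy(skewed))  # 1.0
```

The point of the extractor setting is that each of the independent sources is only guaranteed this weak min-entropy bound, yet the output should be close to uniform.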

Bill Gasarch, University of Maryland: "Communication Complexity of Hamming Distance."

Alice and Bob want to know if two strings of length n are almost equal. That is, do they differ on at most d bits? We are concerned with how many bits they must communicate to determine this.

We show that: 1) If 0 <= d <= O(sqrt(n)), then any deterministic protocol for this problem requires at least n bits. 2) If 0 <= d <= n-1, then any deterministic protocol for this problem requires at least n-2 bits.

Our results are obtained by lower-bounding the ranks of the appropriate matrices. Hence our lower bounds also apply to some quantum communication complexity models.

Joint work with Andris Ambainis, Aravind Srinivasan, and Andrey Utis.
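For concreteness (an illustration, not from the talk): the predicate Alice and Bob must evaluate is the following, and the trivial protocol in which Alice simply sends all n of her bits is what the first lower bound shows to be essentially optimal for small d:

```python
def hamming_distance(x, y):
    """Number of positions where the equal-length bit strings x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def almost_equal(x, y, d):
    """The predicate the two parties want to decide: do x and y differ on at most d bits?"""
    return hamming_distance(x, y) <= d

print(hamming_distance("10110", "10011"))  # 2
print(almost_equal("10110", "10011", 1))   # False
```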

Anindya Patthak: "Correcting Errors Beyond the GS Radius (Parvaresh-Vardy '05)"

We present a recent result due to Parvaresh and Vardy (FOCS 05): given a received word, the list-decoding problem asks for a list of all codewords that agree with the received word in at least some minimum number of positions. Ideally we would like both the list and the required agreement to be small. However, as the required agreement decreases, the list size increases and eventually becomes exponential.

In this talk we present a new family of error-correcting codes that have a poly-time encoder and a poly-time list-decoder, correcting a fraction of adversarial errors up to $1 - \sqrt[M+1]{R^M M^M}$, where $R$ is the rate of the code and $M \geq 1$ is an arbitrary integer. This makes it possible to decode beyond the Guruswami-Sudan radius of $1 - \sqrt{R}$ for all rates less than 1/16.

Asymptotically, for any $\eps > 0$, this allows one to list-decode in polynomial time a fraction of errors up to $1 - \eps$ with a code of length $n$ and rate $\Omega(\eps/\log(1/\eps))$, defined over an alphabet of size $n^{O(\log 1/\eps)}$.
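To see the claimed improvement numerically (an illustration, not part of the talk), one can compare the two radii at a sample rate below 1/16; note that setting M = 1 recovers the Guruswami-Sudan bound:

```python
def gs_radius(R):
    """Guruswami-Sudan list-decoding radius: 1 - sqrt(R)."""
    return 1 - R ** 0.5

def pv_radius(R, M):
    """Parvaresh-Vardy radius: 1 - (M+1)-th root of R^M * M^M."""
    return 1 - (R ** M * M ** M) ** (1 / (M + 1))

R = 1 / 32  # a sample rate below 1/16, where the new codes can win

# M = 1 coincides with the GS radius; M = 2 already decodes strictly further.
print(pv_radius(R, 1) == gs_radius(R))
print(pv_radius(R, 2) > gs_radius(R))   # True
```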

Leslie Valiant, Harvard University: "A Quantitative Theory of Neural Computation"

A central open question of neuroscience is to identify the data structures and algorithms that are used in neural systems to support successive acts of basic tasks such as memorization and association. We describe a theory of neural computation based on three physical parameters: the number n of neurons, the number d of synaptic connections per neuron, and the inverse synaptic strength k, expressed as the number of presynaptic action potentials needed to cause a postsynaptic action potential. Our fourth parameter r expresses the number of neurons that represent a real-world item. We describe a computational mechanism for realizing hierarchical memorization and other cognitive tasks that implies a relationship among these four parameters. For the locust olfactory system, estimates for all four parameters are available, and we show that these numbers are in agreement with the theory's predictions. In the human medial temporal lobe, neurons that represent invariant concepts have been identified, and we offer a quantitative mechanistic explanation of these otherwise paradoxical findings. More generally, we identify two useful regimes for neural computation: one with r and k large, where each neuron may represent many items, and another in which r is small, k is 1, and every neuron represents at most one item.