## Research Summary

The primary goal of my research is to understand whether access to random
numbers can make computation more efficient. An important advance in
computer science was the development of efficient randomized algorithms to
solve computational problems not known to have efficient deterministic
algorithms. Such problems include number-theoretic problems, such as factoring
polynomials, as well as a variety of approximation problems.

In practice, however, computers do not have access to truly random numbers.
Instead, they use the outputs of pseudorandom number generators or physical
devices. It is therefore natural and important to ask whether we can still
make use of efficient randomized algorithms when we have no source of
randomness, or only a defective one. In my work, I have shown how to convert
randomized algorithms into robust randomized algorithms that work even if the
source of randomness is defective. For certain natural classes of algorithms,
I have been able to convert the randomized algorithms into deterministic ones.
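To illustrate the idea behind such conversions, here is a minimal Python sketch, entirely my own illustration and not code from this page: a seeded "extractor" (faked here with a hash, where a real construction would carry proven guarantees) turns one sample from a defective source into many candidate random strings, one per seed, and a majority vote over the algorithm's answers washes out the occasional bad run.

```python
import hashlib

def toy_extractor(weak_sample: bytes, seed: int, m: int = 16) -> bytes:
    """Stand-in for a seeded extractor: mix the weak sample with a short
    seed via a hash. (Hashing is only an illustrative heuristic here.)"""
    return hashlib.sha256(seed.to_bytes(4, "big") + weak_sample).digest()[:m]

def simulate_with_weak_source(randomized_alg, inp, weak_sample, num_seeds=64):
    """Run the algorithm once per seed and take a majority vote, so a
    single defective sample still yields a reliable answer."""
    votes = [randomized_alg(inp, toy_extractor(weak_sample, s))
             for s in range(num_seeds)]
    return max(set(votes), key=votes.count)

def noisy_is_even(n, rand_bytes):
    """Toy randomized algorithm: decides whether n is even, but errs
    whenever its first random byte is small (roughly 1/8 of the time)."""
    correct = (n % 2 == 0)
    return correct if rand_bytes[0] >= 32 else (not correct)
```

With 64 seeds, far fewer than half the hash outputs begin with a small byte, so `simulate_with_weak_source(noisy_is_even, 10, b"not very random")` recovers the correct answer despite both the defective source and the algorithm's own errors.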

The key tool for much of this work is the extractor. An extractor is an
efficient algorithm that distills high-quality randomness from a
low-quality random source, using a short auxiliary seed of high-quality
random bits.
Building on my earlier work, Nisan and I defined and constructed the first good
extractor, and I have subsequently constructed several others. Moreover, I
have shown how extractors can be used to construct various types of pseudorandom
generators, results which do not directly involve low-quality random sources.
More surprisingly, I have shown that extractors have important applications in
seemingly unrelated areas, including constructions of non-blocking networks and
expanders, coding theory, and the hardness of approximating certain problems.
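For readers who want the formal statement, the now-standard definition (my phrasing, not part of the original summary) reads as follows: a function

```latex
\[
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m
\]
is a $(k,\varepsilon)$-extractor if for every random source $X$ on
$\{0,1\}^n$ with min-entropy at least $k$, the output distribution
$\mathrm{Ext}(X, U_d)$ is $\varepsilon$-close in statistical distance to
the uniform distribution on $\{0,1\}^m$, where $U_d$ denotes a uniformly
random $d$-bit seed.
```

The seed length $d$ is the "few auxiliary high-quality random bits" mentioned above; good constructions keep $d$ very small relative to $n$.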

Most of my work is joint with others; see my publications
for details.

Last modified: September 15, 2003