Submitted by Valeria Sixtos

It is well known in computer science that randomized algorithms are sometimes faster than the best known deterministic algorithms. Work from the research group of UTCS Professor Dana Moshkovitz now shows that, under plausible complexity assumptions, the cost of removing randomness is at most quadratic, and that this upper bound is tight: there exist problems for which randomization gives a quadratic speedup. The bound follows from the first optimal construction of an object long studied in theoretical computer science known as a "pseudorandom generator." The team working alongside Moshkovitz, comprising UTCS professor David Zuckerman, graduate student Justin Oh, and Dean Doron of Ben-Gurion University, presents these findings in the paper "Nearly Optimal Pseudorandomness From Hardness."

Moshkovitz notes that part of the difficulty is that "NP-hard problems and problems with algorithms look very much alike. Often very, very small changes … can make a problem shift" from one category to the other. And the stakes are high: the difference between "having a good algorithm and the best algorithm is exponential."

Their research shows that existing proofs deducing BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. The new work converts randomized algorithms into deterministic ones with little slowdown. "Specifically, assuming exponential lower bounds against nondeterministic circuits, they convert any randomized algorithm over inputs of length n running in time t ≥ n to a deterministic one running in time t^(2+α) for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal, as, under complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. They also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing)."
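The derandomization recipe behind such statements can be sketched in code: given a pseudorandom generator whose outputs "fool" a randomized algorithm, one can run the algorithm deterministically on every possible PRG output and take a majority vote. The sketch below is only a toy illustration of this seed-enumeration idea, not the paper's construction; `toy_prg` and `toy_alg` are invented stand-ins.

```python
from collections import Counter

def derandomize(randomized_alg, prg, seed_len, x):
    """Run the algorithm on every PRG output and return the majority answer.
    If the PRG fools the algorithm, the majority vote matches the answer the
    randomized algorithm gives with high probability over true random bits."""
    votes = Counter()
    for seed in range(2 ** seed_len):   # enumerate all 2^seed_len seeds
        r = prg(seed)                   # pseudorandom bits for this run
        votes[randomized_alg(x, r)] += 1
    return votes.most_common(1)[0][0]

# Invented stand-ins, for illustration only (NOT a real PRG or algorithm):
def toy_prg(seed, out_bits=8):
    # stretch a short seed into more bits by recycling its low bits
    return [(seed >> (i % 4)) & 1 for i in range(out_bits)]

def toy_alg(x, r):
    # "is x even?", but deliberately wrong on 1/4 of random strings
    wrong = (r[0] == 1 and r[1] == 1)
    correct = (x % 2 == 0)
    return (not correct) if wrong else correct
```

With a seed length of (1+α)·log s, the loop runs s^(1+α) times; when each run costs about t ≈ s steps, that is where an overall t^(2+α) running time comes from.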

Justin Oh explains randomness in terms of flipping coins: a string is fully random "because every single coin is completely random. And entropy means not every single coin is random. It only means some of the coins are." The team showed "that [they] could construct this thing called a pseudoentropy generator, not a pseudorandom generator. It's not completely random, but it's still a little bit random with a very short seed. A surprisingly short seed."
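Oh's coin-flip picture can be made concrete with a toy entropy calculation (a sketch; the helper names and the fixed-coin model are invented for illustration): a source of n coins in which only k positions are fair coins, and the rest are fixed, carries k bits of Shannon entropy rather than n.

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def coin_distribution(n_coins, random_positions):
    """Distribution over n-bit strings where only `random_positions` are
    fair coins; every other coin is fixed to 0 (a toy low-entropy source)."""
    dist = {}
    k = len(random_positions)
    for bits in itertools.product([0, 1], repeat=k):
        s = [0] * n_coins
        for pos, b in zip(random_positions, bits):
            s[pos] = b
        dist[tuple(s)] = 1 / 2 ** k
    return dist

full = coin_distribution(8, list(range(8)))   # every coin is random: 8.0 bits
partial = coin_distribution(8, [0, 3, 5])     # only 3 of 8 coins random: 3.0 bits
```

The `partial` source looks like eight coin flips but behaves, information-theoretically, like only three.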

Their results follow from "a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α)log s, under the assumption that there exists a function f ∈ E that requires nondeterministic circuits of size at least 2^((1−α′)n), where α′ = O(α). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list recoverable codes."

The paper was accepted to the 2020 IEEE Symposium on Foundations of Computer Science (FOCS), one of the top theoretical computer science conferences, which annually covers a broad range of research. The team achieved the result in a remarkably short time: just a few months. Moshkovitz said the short timeline was "especially meaningful" because the team "had to overcome very serious obstacles."