Adam Klivans, a professor of computer science at the University of Texas at Austin, has received the 20-Year Test of Time Award at the 66th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2025), one of the field’s most prestigious conferences. The award honors research published two decades ago that has continued to shape the direction of computer science.
The recognition is for a 2005 paper titled “Agnostically Learning Halfspaces,” which Klivans co-authored with Adam Tauman Kalai (TTI-Chicago), Yishay Mansour (Tel Aviv University), and Rocco A. Servedio (Columbia University).
The Test of Time Award is reserved for work that has proven not only influential at the moment of publication, but enduring—research that remains relevant as the field evolves.
Learning from Imperfect Data
At the heart of Klivans’s paper is a problem that lies at the center of modern machine learning: how computers can learn from data that is incomplete, unreliable, or simply wrong.
In theory, many learning algorithms assume a clean world, where data is perfectly labeled and patterns are easy to detect. In practice, real-world data is noisy. Labels can be mistaken, biased, or even deliberately corrupted.
“Agnostically Learning Halfspaces” tackled this challenge head-on. The paper showed that it is possible to efficiently learn a simple but powerful type of decision rule—known as a halfspace, or a dividing line that separates data into two groups—even when the data is riddled with errors.
This represented a major advance over earlier approaches, which largely depended on the unrealistic assumption that data could be neatly separated without mistakes.
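To make the idea concrete, here is a minimal Python sketch of the simplest (degree-1) case of the paper's approach: fit a linear function to noisy ±1 labels by minimizing the average absolute error, then classify by the sign of that function. The data generator, the random label flips, and names like make_noisy_halfspace are illustrative choices for this sketch, not details from the paper; the full algorithm applies the same L1 regression over low-degree polynomial features of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_halfspace(n=2000, d=5, noise_rate=0.1):
    """Sample points, label them with a hidden halfspace sign(w . x),
    then corrupt a fraction of the labels (the 'agnostic' errors).
    Here the corrupted labels are chosen at random for simplicity;
    in the agnostic model they could be arbitrary."""
    w_true = rng.normal(size=d)
    w_true /= np.linalg.norm(w_true)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_true)
    flip = rng.random(n) < noise_rate
    y[flip] *= -1
    return X, y, w_true

def l1_linear_regression(X, y, steps=2000, lr=0.05):
    """Degree-1 case of L1 polynomial regression: fit p(x) = w . x
    minimizing mean |p(x_i) - y_i| by subgradient descent, then
    classify new points with sign(p(x))."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(steps):
        residual_sign = np.sign(X @ w - y)   # subgradient of |p(x) - y|
        grad = X.T @ residual_sign / n
        w -= lr / np.sqrt(t + 1) * grad      # decaying step size
    return w

X, y, w_true = make_noisy_halfspace()
w_hat = l1_linear_regression(X, y)
agreement = np.mean(np.sign(X @ w_hat) == np.sign(X @ w_true))
print(f"agreement with the hidden halfspace: {agreement:.2%}")
```

Even with a tenth of the labels flipped, the fitted rule agrees with the hidden halfspace on nearly all points, which is the kind of guarantee, made rigorous, that the paper established.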
A Lasting Impact
The ideas introduced in the paper helped open new directions in computational learning theory, influencing how researchers think about robustness and reliability in machine learning systems. Over the years, the work has been cited and extended across a wide range of theoretical and applied research.
This year, Klivans and collaborators resolved a long-standing question in a new paper spotlighted at NeurIPS 2025, another leading conference in artificial intelligence.
The follow-up paper, “The Power of Iterative Filtering for Supervised Learning with (Heavy) Contamination,” was authored by Klivans along with his student Konstantinos Stavropoulos, postdoctoral researcher Arsen Vasilyan, and Kevin Tian.
The work introduces a general technique that repeatedly filters out suspicious or corrupted data points, allowing a learning algorithm to focus on what remains. Even when a large fraction of the data has been compromised, the method can recover enough reliable information for accurate learning.
The approach extends the original insights of “Agnostically Learning Halfspaces” to an even broader and more realistic setting, resolving a question that had remained open for nearly 20 years.
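The sketch below gives a rough, schematic picture of a filtering loop of this flavor; it is not the algorithm from the paper. It alternates between fitting a simple least-squares model on the points it still trusts and discarding the points that model fits worst. The residual-based score, the function name iterative_filtering, and all parameter choices (rounds, drop_frac, the 30% contamination rate) are stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def iterative_filtering(X, y, rounds=10, drop_frac=0.05):
    """Schematic iterative filtering: repeatedly fit a model on the
    points we still trust, then discard the points it fits worst.
    Scoring by residual magnitude is a toy stand-in for the
    statistical tests a real algorithm would use."""
    keep = np.ones(len(y), dtype=bool)
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        # Least-squares fit on the currently trusted points.
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        # Filter: drop the trusted points with the largest residuals.
        residuals = np.abs(X @ w - y)
        residuals[~keep] = -np.inf        # removed points stay removed
        n_drop = int(drop_frac * keep.sum())
        if n_drop == 0:
            break
        worst = np.argsort(residuals)[-n_drop:]
        keep[worst] = False
    return w, keep

# A hidden halfspace plus heavy contamination: 30% of labels inverted.
d = 5
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(3000, d))
y = np.sign(X @ w_true)
bad = rng.random(len(y)) < 0.3
y[bad] *= -1

w_hat, keep = iterative_filtering(X, y)
agreement = np.mean(np.sign(X @ w_hat) == np.sign(X @ w_true))
print(f"kept {keep.sum()} of {len(y)} points; "
      f"agreement with hidden rule: {agreement:.2%}")
```

The point of the demo is the structure of the loop, fit, score, filter, repeat, which lets the learner converge on the uncorrupted core of the data even when a large fraction has been compromised.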
About Adam
Adam Klivans is a professor of computer science at UT Austin, where he works in theoretical computer science, machine learning, and protein engineering. He holds the Admiral B.R. Inman Centennial Chair in Computing Theory and serves as director of both the NSF Institute for Foundations of Machine Learning (IFML) and the UT Austin Machine Learning Lab.