NIPS 2010 Workshop

Robust Statistical Learning

(robustml)


 

At the core of statistical machine learning is the task of inferring conclusions from data, typically using statistical models that describe probabilistic relationships among the underlying variables. Such modeling allows us to make strong predictions even from limited data by leveraging specific problem structure. On the flip side, however, when the model assumptions do not hold exactly, the resulting methods may deteriorate severely. A simple example: even a few corrupted points, or points with a few corrupted entries, can severely throw off standard SVD-based PCA.
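To make the PCA example concrete, the following short Python sketch (the data, dimensions, and corruption pattern are illustrative assumptions, not part of the workshop material) shows how a handful of corrupted entries can rotate the leading principal direction recovered by a plain SVD:

import numpy as np

rng = np.random.default_rng(0)

# "Clean" low-rank data: 200 points lying near a one-dimensional subspace of R^20.
u = rng.standard_normal(20)
u /= np.linalg.norm(u)
X = np.outer(rng.standard_normal(200), u) + 0.01 * rng.standard_normal((200, 20))

# Corrupt a few entries of a few rows with large values.
X_dirty = X.copy()
X_dirty[:3, :2] += 50.0

def top_direction(A):
    # Leading right singular vector of the centered data = first principal direction.
    _, _, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
    return Vt[0]

# Alignment near 1 means the estimated direction matches the true subspace;
# the corrupted version typically drifts far away.
print("alignment (clean):", abs(top_direction(X) @ u))
print("alignment (dirty):", abs(top_direction(X_dirty) @ u))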


The goal of this workshop is to investigate this "robust learning" setting, in which the data deviate from the model assumptions in a variety of ways. Depending on what is known about the deviations, we have a spectrum of approaches:


(a) Dirty Models: Call a statistical model "clean" if it imposes a single clean structural assumption such as sparsity or low rank. Such methods have proven very effective at imposing bias without being overly restrictive. If, in addition, one has prior knowledge about the structure of the deviations, one can combine two (or more) clean models to develop a robust method. For example, approximating the data by the sum of a sparse matrix and a low-rank one leads to a PCA that is robust to corrupted entries.
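As a concrete illustration of the sparse-plus-low-rank idea, here is a minimal Python sketch of principal component pursuit, which decomposes an observed matrix M into a low-rank part L and a sparse part S by (approximately) minimizing ||L||_* + lambda ||S||_1 subject to L + S = M, using a basic ADMM loop. The function names, parameter defaults, and synthetic data below are illustrative assumptions, not the workshop's own implementation:

import numpy as np

def soft_threshold(A, tau):
    # Entrywise shrinkage: the proximal operator of the l1 norm.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svd_threshold(A, tau):
    # Singular-value shrinkage: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_pca(M, lam=None, mu=None, n_iter=200):
    # Principal component pursuit via ADMM, with commonly used default parameters.
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Usage on synthetic data: a rank-2 matrix plus a few large corrupted entries.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 50))
S_true = np.zeros((100, 50))
S_true[rng.integers(0, 100, 50), rng.integers(0, 50, 50)] = 20.0
L_hat, S_hat = robust_pca(L_true + S_true)
print("relative low-rank recovery error:",
      np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))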

(b) Robust Optimization: Most statistical learning methods implicitly or explicitly solve an underlying optimization problem. Robust optimization uses techniques from convexity and duality to construct solutions that are immunized against a bounded level of uncertainty, typically expressed as bounded (but otherwise arbitrary, i.e., adversarial) perturbations of the problem parameters.
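A classical instance of this viewpoint is robust least squares: guarding against any perturbation of the data matrix with spectral norm at most rho is equivalent to solving the regularized problem min_x ||Ax - b||_2 + rho ||x||_2 (a result due to El Ghaoui and Lebret). The Python sketch below solves the regularized equivalent on made-up data with an assumed uncertainty level rho; it is an illustration, not an endorsed formulation from the workshop:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.1 * rng.standard_normal(50)
rho = 0.5  # assumed bound on the adversarial perturbation of A

def worst_case_residual(x):
    # max over ||Delta|| <= rho of ||(A + Delta) x - b||_2 has this closed form.
    return np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)

x_robust = minimize(worst_case_residual, np.zeros(10), method="Nelder-Mead",
                    options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8}).x
x_nominal = np.linalg.lstsq(A, b, rcond=None)[0]
# The robust solution is typically shrunk relative to the nominal least-squares fit.
print("||x_robust|| :", np.linalg.norm(x_robust))
print("||x_nominal||:", np.linalg.norm(x_nominal))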


(c) Classical Robust Statistics; Adversarial Learning: There is a large body of work on classical robust statistics, which develops estimation methods that are robust to misspecified modeling assumptions in general, rather than modeling the outliers specifically. While this area is still quite active, it has a long history, with many results developed in the 1960s, 70s, and 80s. There has also been significant recent work in adversarial machine learning: here, the models allow for arbitrary covariates and responses, and do not assume that these are drawn from any specific parametric distribution. In particular, such techniques could be applied to robust learning settings where the response may deviate arbitrarily from some function of the covariates.
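On the classical robust statistics side, a representative tool is M-estimation with the Huber loss, which down-weights large residuals instead of letting them dominate a squared-error fit. The sketch below (the IRLS routine, the threshold delta, and the synthetic outliers are illustrative assumptions) fits such an estimator for linear regression:

import numpy as np

def huber_regression(X, y, delta=1.35, n_iter=50):
    # Huber M-estimator fit by iteratively reweighted least squares (IRLS).
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from ordinary least squares
    for _ in range(n_iter):
        r = y - X @ beta
        # Huber weights: 1 inside the threshold, delta/|r| outside.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Usage: a linear model where a handful of responses are grossly corrupted.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y[:5] += 100.0  # a few outliers in the response
print("OLS error  :", np.linalg.norm(np.linalg.lstsq(X, y, rcond=None)[0] - beta_true))
print("Huber error:", np.linalg.norm(huber_regression(X, y) - beta_true))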


Thus, while there has been a resurgence of robust learning methods (broadly understood) in recent years, it has largely come from different communities that rarely interact: (classical) robust statistics, adversarial machine learning, robust optimization, and multi-structured or dirty-model learning. The aim of this workshop is to bring together researchers from these different communities and identify common intuitions underlying such robust learning methods. In particular, we will be interested in understanding where techniques from one field may be applicable, and what their limitations are. As one important example, we will consider the high-dimensional regime, where it is not clear how to extend many of the techniques that succeed in the classical robust statistics setup. There has been a great deal of recent interest and work in modeling such high-dimensional data, and the natural extension of these results is to make them more robust. Indeed, with increasingly high-dimensional and "dirty" real-world data that do not conform to clean modeling assumptions, this is a pressing need.

Details


Workshop Date: Dec 10, 2010.

Workshop registration is open.


Schedule

Spotlights:

High-Dimensional Robust Structure Learning of Ising Models on Sparse Random Graphs. Animashree Anandkumar, MIT; Vincent Y. F. Tan, MIT; Alan S. Willsky, MIT.

Learning from Noisy Data under Distributional Assumptions. Nicolò Cesa-Bianchi, Università degli Studi di Milano; Shai Shalev-Shwartz, The Hebrew University; Ohad Shamir, Microsoft Research.

Robust Matrix Decomposition with Outliers. Daniel Hsu, Rutgers University; Sham M. Kakade, University of Pennsylvania; Tong Zhang, Rutgers University.

Regularization via Statistical Stability. Chinghway Lim, UC Berkeley; Bin Yu, UC Berkeley.

Weighted Neighborhood Linkage. Pramod Gupta, Georgia Institute of Technology; Maria Florina Balcan, Georgia Institute of Technology.

Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming. Alexandre Belloni, Duke University; Victor Chernozhukov, MIT; Lie Wang, MIT.

On Robustness in Kernel Based Regression. Kris De Brabanter, Dept. of Electrical Engineering (ESAT-SCD), Katholieke Universiteit Leuven.

Organizers


Pradeep Ravikumar, UT Austin

Constantine Caramanis, UT Austin

Sujay Sanghavi, UT Austin