Graphical models have become an important field of machine learning research. In many applications, the structure of a graphical model is unknown and must be learned from data. The structure learning problem is interesting from both theoretical and practical perspectives. On one hand, many statistical estimators have been proposed and carefully studied for learning graphical models. On the other hand, as datasets grow ever larger, efficiently computing these estimators on data with extremely large dimensionality becomes an important issue. In this workshop, we focus on the structure learning problem for Gaussian graphical models. Topics of interest include, but are not limited to, the following:
  • Theoretical foundations for learning the structure of Gaussian graphical models -- statistical estimators and their sample complexities.
  • Efficient optimization techniques for computing the statistical estimators.
  • Multi-core or distributed algorithms for learning the structure of extremely high dimensional problems.
  • Estimating inverse covariance matrices under special scenarios, for example, time series or missing data.
  • Real-world applications that require learning graphical model structures.
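To make the problem concrete: in a Gaussian graphical model, the sparsity pattern of the inverse covariance (precision) matrix encodes the graph structure, with a zero entry meaning no edge between the corresponding variables. The following sketch illustrates this with the simplest possible estimator, thresholding the empirical precision matrix; it is a toy illustration of the problem statement, not one of the sophisticated estimators the workshop concerns. The chain graph, the sample size, and the threshold value are all illustrative choices.

```python
import numpy as np

# Ground-truth sparse precision matrix for a 3-node chain graph 1 - 2 - 3:
# variables 1 and 3 have no direct edge, so theta[0, 2] == 0.
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])
sigma = np.linalg.inv(theta)  # covariance of the corresponding Gaussian

# Draw samples from the model.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), sigma, size=50000)

# Empirical precision matrix: invert the sample covariance.
theta_hat = np.linalg.inv(np.cov(X, rowvar=False))

# Naive structure estimate: keep entries whose magnitude exceeds a threshold.
support = np.abs(theta_hat) > 0.1
print(support.astype(int))
```

With enough samples the recovered support matches the chain graph. In high dimensions this naive approach breaks down (the sample covariance may not even be invertible when the dimension exceeds the sample size), which is precisely why regularized estimators such as the graphical lasso, and efficient algorithms for computing them, are the focus of this workshop.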

Important information:

  • Workshop: June 26, 2014.
  • Location: Beijing, China.
  • Primary contact: Cho-Jui Hsieh (cjhsieh@cs.utexas.edu)
  • Pradeep Ravikumar (UT Austin)
  • Peder Olsen (IBM Research)
  • Arindam Banerjee (University of Minnesota, Twin Cities)
  • Rahul Mazumder (Columbia University)
  • Po-Ling Loh (UC Berkeley)
  • Jason Lee (Stanford)
  • Weibin Zhang (Hong Kong University of Science and Technology)
  • Steven Rennie (IBM Research)
  • Cho-Jui Hsieh (UT Austin)