On the Use of Variational Inference for Learning Discrete Graphical Models (2011)
We study the general class of estimators for graphical model structure based on optimizing an ℓ1-regularized approximate log-likelihood, where the approximate likelihood uses tractable variational approximations of the partition function. We provide a message-passing algorithm that directly computes the ℓ1-regularized approximate MLE. Further, in the case of certain reweighted entropy approximations to the partition function, we show that, surprisingly, the ℓ1-regularized approximate MLE has a closed form, so that there is no longer any need to run many iterations of approximate inference and message passing. Lastly, we analyze this general class of estimators for graph structure recovery, i.e., its sparsistency, and show that it is indeed sparsistent under certain conditions.
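To make the estimator family concrete, here is a minimal sketch of the objective the abstract refers to, written in our own notation (the symbols \widehat{\mu}, \mathcal{B}, A, and \lambda_n are our labels for the empirical moments, the variational approximation, the log-partition function, and the regularization level, and are not necessarily those used in the paper):

\hat{\theta} \;\in\; \arg\min_{\theta} \Big\{ -\langle \theta, \widehat{\mu} \rangle + \mathcal{B}(\theta) + \lambda_n \lVert \theta \rVert_1 \Big\}

Taking \mathcal{B}(\theta) = A(\theta) would correspond to the exact ℓ1-regularized MLE; substituting a tractable reweighted (e.g. tree-reweighted) entropy approximation gives the approximate estimators described above, including the case with a closed-form solution.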
Citation:
In International Conference on Machine Learning (ICML), 2011.
Pradeep Ravikumar (Formerly affiliated Faculty), pradeepr [at] cs.utexas.edu
Eunho Yang (Ph.D. Alumni), eunho [at] cs.utexas.edu