I'm Xiangru Huang (黄相如). I'm currently a PhD student in the Computer Science Department at the University of Texas at Austin, working with Qixing Huang. My research focuses on machine learning, optimization, and their applications. I received my Bachelor's degree from Shanghai Jiao Tong University (ACM Honored Class). For more details, please check my CV.
Email: xrhuang at cs.utexas.edu
PPDSparse: A Parallel Primal and Dual Sparse Method to Extreme Classification.
Ian E.H. Yen, Xiangru Huang, Wei Dai, Pradeep Ravikumar, Inderjit S. Dhillon and Eric P. Xing. To appear in ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2017.
Greedy Direction Method of Multiplier for MAP Inference of Large Output Domain.
Xiangru Huang, Ian E.H. Yen, Ruohan Zhang, Qixing Huang, Pradeep Ravikumar and Inderjit S. Dhillon.
Artificial Intelligence and Statistics (AISTATS), 2017.
Dual Decomposed Learning with Factorwise Oracle for Structural SVM of Large Output Domain. [pdf]
Ian E.H. Yen, Xiangru Huang, Kai Zhong, Ruohan Zhang, Pradeep Ravikumar and Inderjit S. Dhillon.
In Advances in Neural Information Processing Systems (NIPS), 2016.
PD-Sparse: A Primal and Dual Sparse Approach to Extreme Multiclass and Multilabel Classification.
Ian E.H. Yen*, Xiangru Huang*, Kai Zhong, Pradeep Ravikumar and Inderjit S. Dhillon. (* contributed equally)
In International Conference on Machine Learning (ICML), 2016.
Trial and error in influential social networks.
Xiaohui Bei, Ning Chen, Liyu Dou, Xiangru Huang, Ruixin Qiang. (ordered alphabetically by last name)
In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2013.
An efficient and accurate solver for MAP inference problems. Compared to state-of-the-art solvers (e.g., TRW-S, AD3, MPLP), our solver is orders of magnitude faster without sacrificing accuracy.
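For readers unfamiliar with the problem class, MAP inference over a pairwise graphical model can be written in the standard form below (this is the generic formulation, not a description of our solver's internals):

$$\max_{x \in \mathcal{X}_1 \times \cdots \times \mathcal{X}_n} \;\; \sum_{i \in V} \theta_i(x_i) \;+\; \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j)$$

Here each variable $x_i$ takes values in a discrete domain $\mathcal{X}_i$, $\theta_i$ are unary potentials, and $\theta_{ij}$ are pairwise potentials over the edges $E$ of the graph; "large output domain" refers to the case where the $\mathcal{X}_i$ are very large.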
An efficient solver for multiclass and multilabel classification problems, designed to handle millions of classes/labels on a single core. We are working on a distributed version. (For details, please check this ICML 2016 paper.)
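To give an intuition for why primal sparsity helps at this scale, here is a minimal, purely illustrative sketch (not the PD-Sparse implementation; `W`, `x`, and `predict_sparse` are hypothetical names): if the learned weight matrix is sparse, scoring a sparse input only touches the few classes active for its nonzero features, so prediction cost depends on the number of nonzeros rather than on the total number of classes.

```python
def predict_sparse(W, x):
    """Score a sparse input against a sparse weight matrix.

    W: dict mapping feature index -> {class index: weight}
       (only nonzero weights are stored)
    x: dict mapping feature index -> feature value
       (only nonzero features are stored)
    Returns the highest-scoring class, or None if no class is touched.
    """
    scores = {}
    for j, v in x.items():
        # Only classes with a nonzero weight on feature j are updated.
        for c, w in W.get(j, {}).items():
            scores[c] = scores.get(c, 0.0) + v * w
    return max(scores, key=scores.get) if scores else None


# Toy example: 3 features, weights stored sparsely per feature.
W = {0: {1: 2.0}, 2: {0: 1.0, 1: 0.5}}
x = {0: 1.0, 2: 1.0}  # nonzero features 0 and 2
print(predict_sparse(W, x))
```

Even with millions of classes in total, the inner loop above visits only the classes that share a nonzero feature with the input, which is the kind of structure a primal-and-dual sparse method exploits.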