Qiang Liu

Ph.D. Candidate
Advisor: Prof. Alexander Ihler
Information & Computer Science
University of California at Irvine


I am in the fifth year of my Ph.D. I am currently supported by a Microsoft Research PhD Fellowship.


I am co-organizing an ICML’13 workshop on Machine learning meets crowdsourcing.

Research

My research area is machine learning, focusing on problems of learning, inference, and decision making, along with various applications, within the framework of probabilistic graphical models.

Learning. The first step in most machine learning tasks is to learn a model from empirical data: either estimating the parameters of a model with a predefined structure, or estimating the structure itself from the data. I am interested in developing efficient, and possibly distributed, learning algorithms that perform well on real-world data.

  • Here is an efficient distributed learning algorithm based on carefully combining local estimators defined by pseudo-likelihood components: ICML2012.

  • Here is a structure learning algorithm for recovering scale-free networks, which are thought to be common in real-world data: AISTATS2011 (notable paper award).
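The pseudo-likelihood idea behind the distributed learning bullet above can be sketched on a toy two-node Ising model. This is a minimal illustration, not the ICML2012 algorithm: each "worker" maximizes one node's conditional log-likelihood by gradient ascent, and the local estimates of the shared coupling are combined by averaging. All numbers are illustrative.

```python
# Minimal pseudo-likelihood sketch (illustrative, not the ICML2012 method):
# two-node Ising model p(x1, x2) ∝ exp(theta * x1 * x2), x in {-1, +1}.
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.5

# Draw exact samples from the four joint states.
states = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
logits = theta_true * states[:, 0] * states[:, 1]
probs = np.exp(logits) / np.exp(logits).sum()
data = states[rng.choice(4, size=20000, p=probs)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_estimate(s):
    """Maximize one node's conditional log-likelihood, sum(log sigmoid(2*theta*s)),
    by gradient ascent; s = x_i * x_j for each sample."""
    theta = 0.0
    for _ in range(200):
        grad = np.mean(2 * s * (1 - sigmoid(2 * theta * s)))
        theta += 0.5 * grad
    return theta

# In this two-node toy both conditionals reduce to the same statistic s = x1*x2,
# so the "combination" step is a trivial average; on a larger graph each worker
# would hold a different conditional.
s = data[:, 0] * data[:, 1]
theta_hat = 0.5 * (local_estimate(s) + local_estimate(s))
print(theta_hat)
```

The recovered coupling lands close to the true value of 0.5, illustrating the consistency of pseudo-likelihood estimation that the distributed method exploits.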

Inference. Given a graphical model, either handcrafted or learned from data, inference refers to answering queries such as computing marginal probabilities (or the partition function), maximum a posteriori (MAP) estimates, or marginal MAP, a hybrid of marginalization and MAP. I am interested in developing efficient inference algorithms, mostly based on variational methods and taking the form of belief-propagation-like message passing.

  • Marginal MAP is notoriously difficult even on tree-structured graphs. We developed a general variational dual representation for marginal MAP and proposed a set of variational approximation algorithms, including an interesting “mixed-product” BP that hybridizes max-product, sum-product, and a special “argmax-product” message update, and a convergent proximal point algorithm that works by iteratively solving pure marginalization tasks. See UAI2011 (Slides).

  • We proposed an efficient approximate inference algorithm for calculating the log-partition function that unifies Rina Dechter's “one-pass” mini-bucket algorithm with iterative variational algorithms, such as tree reweighted BP. Our method inherits the advantages of both, and easily scales to large clique sizes. Our algorithm can provide both upper and lower bounds for the log-partition function. See ICML2011.

  • Tree reweighted BP provides an upper bound on the log-partition function, while naïve mean field and structured mean field give lower bounds. We show that tree reweighted BP provably gives a lower bound if its weights are set to take negative values in a particular way. We also show that such “negative” tree reweighted BP reduces to structured mean field as the weights approach infinity. For the full story, see UAI2010.
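The sum-product message passing that these variational algorithms build on can be illustrated on a tiny chain. The following sketch (with illustrative potentials, not taken from any of the papers above) computes the exact marginal of the middle variable on a three-node chain x1 - x2 - x3 and checks it against brute-force enumeration:

```python
# Sum-product belief propagation on a 3-node binary chain x1 - x2 - x3.
# Potentials are illustrative.
import numpy as np

psi = np.array([[2.0, 1.0], [1.0, 2.0]])  # shared pairwise potential
phi1 = np.array([3.0, 1.0])               # unary potential on x1

# Leaves send messages inward toward x2.
m_1to2 = psi.T @ phi1          # message from x1 into x2
m_3to2 = psi.T @ np.ones(2)    # message from x3 into x2
belief2 = m_1to2 * m_3to2
belief2 /= belief2.sum()       # marginal p(x2)

# Brute-force check over all 8 joint states.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
w = np.array([phi1[a] * psi[a, b] * psi[b, c] for a, b, c in states])
p2 = np.array([w[[i for i, s in enumerate(states) if s[1] == v]].sum() for v in (0, 1)])
p2 /= p2.sum()
print(np.allclose(belief2, p2))  # True
```

On trees this message passing is exact; the variational algorithms above generalize the same update structure to loopy graphs and to max/sum hybrids.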

Structured decision making. In practice, we often need to take a sequence of actions to achieve a predefined goal, usually in uncertain environments where information is observed sequentially and interactively as we progress. Decision networks (also called influence diagrams) are graphical-model-style representations of such structured decision making problems under uncertainty. Just as Bayesian networks generalize Markov chains or hidden Markov models, decision networks generalize Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Unfortunately, the problem of finding the optimal actions for decision networks is much more challenging than answering queries on Bayesian networks, especially in cases where limited information is observed or where multi-agent cooperation is required (such as in robot soccer games).

  • We extended the powerful variational inference framework to solve decision networks, and on this basis proposed an efficient BP-type algorithm and a convergent proximal point algorithm. Our framework makes it possible to translate essentially any variational algorithm into one that solves influence diagrams. See UAI2012.
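The kind of computation a decision network encodes can be seen in a toy example. The following sketch (illustrative numbers, solved by direct enumeration rather than the variational machinery above) has one chance node, one decision node, and a utility table, and picks the action maximizing expected utility:

```python
# A toy influence diagram: chance node "rain", decision "umbrella",
# and a utility table. Solved by enumeration; numbers are illustrative.
import numpy as np

p_rain = np.array([0.3, 0.7])            # p(rain = yes), p(rain = no)
# utility[decision, rain]: rows = take / leave umbrella
utility = np.array([[70.0, 80.0],        # take: dry either way, minor hassle
                    [20.0, 100.0]])      # leave: soaked if it rains

expected = utility @ p_rain              # expected utility of each decision
best = int(np.argmax(expected))          # -> take umbrella (index 0)
print(best, expected)
```

Enumeration like this is exponential in the number of decisions and observations, which is why general decision networks call for the approximate variational methods developed in UAI2012.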

Applications. I am interested in applying these machine learning methods across a variety of application areas.

  • Crowdsourcing:

    • We reformulate the problem of aggregating crowdsourced labels as a standard inference problem on a factor graph, and propose a class of efficient BP-type algorithms. We show that our BP algorithm generalizes and outperforms the previous algorithm of Karger et al. (2012). For the full story, see NIPS2012.

  • Natural language processing:

    • How well can computers solve SAT sentence completion questions? This is work I was involved in while interning at Microsoft Research Redmond: ACL2012.

  • Sensor networks:

    • Here is a distributed algorithm for learning parameters in sensor networks: ICML2012.

    • My algorithm for solving influence diagrams provides a powerful way to design optimal decentralized detection networks: UAI2012.
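The label-aggregation problem in the crowdsourcing bullet above can be illustrated with a much simpler iterative weighted-majority scheme. This is only in the spirit of, and far simpler than, the NIPS2012 BP algorithm: items are labeled by weighted vote, and workers are reweighted by how often they agree with the current estimate.

```python
# Iterative weighted-majority label aggregation (illustrative sketch,
# not the NIPS2012 BP algorithm). Labels are +/-1; 0 = worker skipped item.
import numpy as np

# votes[i, j] = label worker j gave item i
votes = np.array([
    [+1, +1, -1],
    [+1, +1, +1],
    [-1, -1, +1],
    [-1, +1, -1],
])

weights = np.ones(votes.shape[1])
for _ in range(10):
    est = np.sign(votes @ weights)             # weighted majority per item
    mask = votes != 0
    # worker reliability = fraction of its labels agreeing with the estimate
    agree = (votes == est[:, None]) & mask
    weights = agree.sum(0) / mask.sum(0)
print(est)
```

Here worker 3 disagrees with the consensus on most items, so its weight drops and the majority estimate is dominated by the two reliable workers. The factor-graph BP formulation makes this reweighting principled by treating worker reliabilities as latent variables to be marginalized.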