I'm a 4th year Ph.D. student (advanced to candidacy) in the Computer Science Department at University of California, Irvine, under the supervision of Prof. Alexander Ihler. I also work closely with Prof. Rina Dechter.

I'm generally interested in artificial intelligence, machine learning, and computer vision. My research currently focuses on inference algorithms for probabilistic graphical models, integrating heuristic search, variational bounds, and Monte Carlo methods. I'm also working on video ad analysis using deep neural networks.

Qi Lou, Rina Dechter, Alexander Ihler
The 32nd AAAI Conference on Artificial Intelligence (AAAI-18). [to appear]
    Marginal MAP is a key task in Bayesian inference and decision-making. It is known to be very difficult in general, particularly because the evaluation of each MAP assignment requires solving an internal summation problem. In this paper, we propose a best-first search algorithm that provides anytime upper bounds for marginal MAP in graphical models. It folds the computation of external maximization and internal summation into an AND/OR tree search framework, and solves them simultaneously using a unified best-first search algorithm. The algorithm avoids some unnecessary computation of summation sub-problems associated with MAP assignments, and thus yields significant time savings. Furthermore, our algorithm is able to operate within limited memory. Empirical evaluation on three challenging benchmarks demonstrates that our unified best-first search algorithm using pre-compiled variational heuristics often provides tighter anytime upper bounds than state-of-the-art baselines.
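The core idea behind best-first search with a pre-compiled, admissible (upper-bounding) heuristic can be illustrated on a toy MAP problem. The sketch below is not the paper's AND/OR algorithm — the model, the unary factors, and the max-product heuristic are all my own minimal illustration of why an admissible bound lets the first fully expanded assignment be optimal:

```python
import heapq
import itertools

# Toy model: maximize the product of unary factors factors[i][x_i] over binary x.
factors = [[0.9, 0.4], [0.2, 0.7], [0.6, 0.5]]

def heuristic(depth):
    """Admissible upper bound: product of the best value of each unassigned factor."""
    h = 1.0
    for f in factors[depth:]:
        h *= max(f)
    return h

def best_first_map():
    # Priority queue keyed on the (negated) upper bound g * h: always expand
    # the most promising partial assignment first, as in A*.
    frontier = [(-heuristic(0), 1.0, ())]   # (-bound, value so far, partial assignment)
    while frontier:
        neg_bound, g, assign = heapq.heappop(frontier)
        if len(assign) == len(factors):
            # The first complete assignment popped is optimal, because every
            # remaining frontier entry is bounded above by its priority.
            return g, assign
        d = len(assign)
        for v in (0, 1):
            g2 = g * factors[d][v]
            heapq.heappush(frontier, (-(g2 * heuristic(d + 1)), g2, assign + (v,)))

best, arg = best_first_map()

# Sanity check against brute-force enumeration.
brute_val, brute_arg = max(
    (factors[0][a] * factors[1][b] * factors[2][c], (a, b, c))
    for a, b, c in itertools.product((0, 1), repeat=3)
)
assert abs(best - brute_val) < 1e-12 and arg == brute_arg
```

At any point, the top of the frontier is itself an anytime upper bound on the MAP value, which is the property the paper exploits.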
Qi Lou, Rina Dechter, Alexander Ihler
The 31st Annual Conference on Neural Information Processing Systems (NIPS-17). [poster]
     Computing the partition function is a key inference task in many graphical models. In this paper, we propose a dynamic importance sampling scheme that provides anytime finite-sample bounds for the partition function. Our algorithm balances the advantages of the three major inference strategies, heuristic search, variational bounds, and Monte Carlo methods, blending sampling with search to refine a variationally defined proposal. Our algorithm combines and generalizes recent work on anytime search and probabilistic bounds of the partition function. By using an intelligently chosen weighted average over the samples, we construct an unbiased estimator of the partition function with strong finite-sample confidence intervals that inherit both the rapid early improvement rate of sampling and the long-term benefits of an improved proposal from search. This gives significantly improved anytime behavior, and more flexible trade-offs between memory, time, and solution quality. We demonstrate the effectiveness of our approach empirically on real-world problem instances taken from recent UAI competitions.
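The paper's dynamic scheme refines a variationally defined proposal with search; the underlying building block — an unbiased importance-sampling estimator of the partition function — can be sketched in a static form. The toy pairwise model and the uniform proposal below are my own illustration, not the paper's construction:

```python
import itertools
import math
import random

random.seed(0)

# Toy pairwise model over three binary variables: f(x) = exp(sum of edge scores).
theta = {(0, 1): 0.8, (1, 2): -0.5}

def unnorm(x):
    """Unnormalized probability f(x) with +/- agreement scores on each edge."""
    return math.exp(sum(w * (1 if x[i] == x[j] else -1) for (i, j), w in theta.items()))

# Exact partition function by enumeration (feasible only for tiny models).
Z = sum(unnorm(x) for x in itertools.product((0, 1), repeat=3))

# Importance sampling with a uniform proposal q(x) = 1/8: the weight
# w(x) = f(x) / q(x) satisfies E_q[w] = Z, so the sample mean is unbiased
# and concentrates around Z as the number of samples grows.
N = 50000
est = sum(unnorm(tuple(random.randint(0, 1) for _ in range(3))) * 8
          for _ in range(N)) / N

assert abs(est - Z) / Z < 0.05
```

The paper's contribution is to keep improving the proposal via search while sampling, and to turn the finite-sample concentration of these weights into anytime probabilistic bounds.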
Qi Lou, Rina Dechter, Alexander Ihler
The 31st AAAI Conference on Artificial Intelligence (AAAI-17).
    Bounding the partition function is a key inference task in many graphical models. In this paper, we develop an anytime anyspace search algorithm taking advantage of AND/OR tree structure and optimized variational heuristics to tighten deterministic bounds on the partition function. We study how our priority-driven best-first search scheme can improve on state-of-the-art variational bounds in an anytime way within limited memory resources, as well as the effect of the AND/OR framework in exploiting conditional independence structure during search in the context of summation. We compare our resulting bounds to a number of existing methods, and show that our approach offers a number of advantages on real-world problem instances taken from recent UAI competitions.
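The payoff of exploiting conditional independence in summation can be seen on a small tree-structured model: conditioning on a variable at an OR node splits the remaining sum into independent branches whose results multiply at the AND node. The model and factor below are illustrative, not from the paper:

```python
import itertools
import math

# Tree-structured model: x0 is the parent of x1 and x2; one pairwise factor per edge.
def edge(a, b):
    """A simple attractive pairwise factor rewarding agreement."""
    return math.exp(0.6 if a == b else -0.6)

def Z_brute():
    # Flat enumeration over all joint configurations: exponential in general.
    return sum(edge(x0, x1) * edge(x0, x2)
               for x0, x1, x2 in itertools.product((0, 1), repeat=3))

def Z_andor():
    # OR node: sum over the root's values.  AND node: conditioned on x0, the
    # branch sums over x1 and x2 are independent, so they simply multiply.
    total = 0.0
    for x0 in (0, 1):
        branch1 = sum(edge(x0, x1) for x1 in (0, 1))
        branch2 = sum(edge(x0, x2) for x2 in (0, 1))
        total += branch1 * branch2
    return total

assert abs(Z_brute() - Z_andor()) < 1e-9
```

On this three-variable toy both versions are cheap, but the decomposed form does work linear in the number of branches where flat enumeration is exponential — the structural saving the AND/OR search space captures.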
Wentao Zhu, Qi Lou, Yeeleng Scott Vang, Xiaohui Xie
The 20th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI-17).
    Mammogram classification is directly related to computer-aided diagnosis of breast cancer. Traditional methods require great effort to annotate the training data through costly manual labeling, as well as specialized computational models to detect these annotations at test time. Inspired by the success of using deep convolutional features for natural image analysis and multi-instance learning for labeling a set of instances/patches, we propose end-to-end trained deep multi-instance networks for mass classification based on whole mammograms, without the aforementioned costly need to annotate the training data. We explore three different schemes to construct deep multi-instance networks for whole mammogram classification. Experimental results on the INbreast dataset demonstrate the robustness of the proposed deep networks compared to previous work that uses segmentation and detection annotations during training.
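A common way to connect patch-level scores to an image-level label in multi-instance learning is max pooling over instance probabilities: a bag is positive if any instance is. The sketch below is a generic illustration of that aggregation with made-up scores, not the paper's network or one of its three schemes:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bag_probability(instance_scores):
    """Max-pooling MIL aggregation: a bag (whole image) is positive iff some
    instance (patch) is positive, so the bag probability is the maximum
    instance probability."""
    return max(sigmoid(s) for s in instance_scores)

# A mammogram as a bag of patch logits (e.g., outputs of a patch-level scorer):
# one highly suspicious patch is enough to flag the whole image.
benign = [-2.1, -3.0, -1.7]
malignant = [-2.1, -3.0, 2.4]

assert bag_probability(benign) < 0.5 < bag_probability(malignant)
```

Because only the bag label enters the loss, such a model trains end-to-end from image-level labels alone — the property that removes the need for patch annotations.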
Qi Lou, Raviv Raich, Forrest Briggs, Xiaoli Fern
IEEE International Workshop on Machine Learning for Signal Processing, 2013.
    Novelty detection plays an important role in machine learning and signal processing. This paper studies novelty detection in a new setting where the data object is represented as a bag of instances and associated with multiple class labels, referred to as multi-instance multi-label (MIML) learning. Contrary to the common assumption in MIML that each instance in a bag belongs to one of the known classes, in novelty detection, we focus on the scenario where bags may contain novel-class instances. The goal is to determine, for any given instance in a new bag, whether it belongs to a known class or a novel class. Detecting novelty in the MIML setting captures many real-world phenomena and has many potential applications. For example, in a collection of tagged images, the tags may only cover a subset of the objects present in the images. Discovering an object whose class has not been previously tagged can be useful for the purpose of soliciting a label for the new object class. To address this problem, we present a discriminative framework for detecting new class instances. Experiments demonstrate the effectiveness of our proposed method, and reveal that the presence of unlabeled novel instances in training bags is helpful to the detection of such instances at the testing stage.
Forrest Briggs, Xiaoli Fern, Raviv Raich, Qi Lou
ACM Transactions on Knowledge Discovery from Data, Vol. 7 Iss. 3, 2013.
    Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen bags. We instead consider the problem of predicting instance labels while learning from data labeled only at the bag level. We propose a regularized rank-loss objective designed for instance annotation, which can be instantiated with different aggregation models connecting instance-level labels with bag-level label sets. The aggregation models that we consider can be factored as a linear function of a “support instance” for each class, which is a single feature vector representing a whole bag. Hence we name our proposed methods rank-loss Support Instance Machines (SIM). We propose two optimization methods for the rank-loss objective, which is nonconvex. One is a heuristic method that alternates between updating support instances, and solving a convex problem in which the support instances are treated as constant. The other is to apply the constrained concave-convex procedure (CCCP), which can also be interpreted as iteratively updating support instances and solving a convex problem. To solve the convex problem, we employ the Pegasos framework of primal subgradient descent, and prove that it finds an ε-suboptimal solution in runtime that is linear in the number of bags, instances, and 1/ε. Additionally, we suggest a method of extending the linear learning algorithm to nonlinear classification, without increasing the runtime asymptotically. Experiments on artificial and real-world datasets including images and audio show that the proposed methods achieve higher accuracy than other loss functions used in prior work, e.g., Hamming loss, and recent work in ambiguous label classification.
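The Pegasos-style primal subgradient descent used above can be sketched on a plain L2-regularized hinge-loss linear classifier. This toy version deliberately omits the bag structure, support instances, and rank loss; the data and parameters are my own illustration of the optimization step, not the SIM objective:

```python
# Toy linearly separable data: (features, label in {-1, +1}).
data = [((2.0, 2.0), 1), ((3.0, 1.0), 1), ((2.0, 3.0), 1),
        ((-2.0, -2.0), -1), ((-3.0, -1.0), -1), ((-2.0, -3.0), -1)]

def pegasos(data, lam=0.01, T=2000):
    """Deterministic (full-batch) Pegasos: primal subgradient descent on the
    L2-regularized hinge loss with the standard 1/(lam*t) step size."""
    w = [0.0, 0.0]
    n = len(data)
    for t in range(1, T + 1):
        eta = 1.0 / (lam * t)
        # Subgradient of the hinge term: average y*x over margin violators.
        gx = gy = 0.0
        for (x, y) in data:
            if y * (w[0] * x[0] + w[1] * x[1]) < 1:
                gx += y * x[0] / n
                gy += y * x[1] / n
        # Shrink by the regularizer, then step along the hinge subgradient.
        w = [w[0] * (1 - eta * lam) + eta * gx,
             w[1] * (1 - eta * lam) + eta * gy]
    return w

w = pegasos(data)
assert all((w[0] * x[0] + w[1] * x[1]) * y > 0 for x, y in data)
```

The 1/(λt) step size is what gives Pegasos its runtime linear in 1/ε; the paper additionally alternates this convex step with support-instance updates to handle the nonconvex rank-loss objective.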
Raviv Raich, Forrest Briggs, Qi Lou, Xiaoli Fern, David Mellinger
Proceedings of Meetings on Acoustics Vol. 19 Iss. 1, 2013
    We consider the problem of in-situ species monitoring across large spatial and temporal scales. We are interested in the scenario in which omnidirectional microphones are deployed for round-the-clock in-situ species monitoring. In such a scenario, recordings may suffer from two problems: (i) low signal-to-noise ratio and (ii) simultaneous vocalizations of multiple individuals. The multiple instance multiple label framework is commonly used to analyze complex data using the bag-of-instances representation. We propose a framework for converting an audio recording to a collection of instances for species classification.
Qi Lou, Ligang Liu
Computers & Graphics 36(5): 309-320 (2012). Oral presentation at Shape Modeling International 2012
    This paper presents a novel approach, called hybrid clipping, for computing all intersections between two polynomial Bézier curves within a given parametric domain in the plane. Like Bézier clipping, we compute a ‘fat line’ (a region along a line) to bound one of the curves. Then we compute a ‘fat curve’ around the optimal low-degree approximation curve to the other curve. By clipping the fat curve with the fat line, we obtain a new reduced subdomain enclosing the intersection. The clipping process proceeds iteratively, yielding a sequence of subdomains guaranteed to converge to the corresponding intersection. We have proved that the hybrid clipping technique has at least a quadratic convergence rate. Experimental results are presented to show the performance of the proposed approach in comparison with Bézier clipping.
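Both Bézier clipping and the hybrid variant shrink the parameter domain by repeatedly subdividing curves, and the basic operation underlying such subdivision is de Casteljau evaluation. The sketch below shows that generic building block on a made-up cubic, not the paper's clipping algorithm itself:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t by repeated linear interpolation.
    The intermediate points of this recursion also yield the control polygons
    of the two subdivided halves used by clipping-style intersection methods."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Cubic Bézier: endpoints are interpolated exactly.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
assert de_casteljau(ctrl, 0.0) == ctrl[0]
assert de_casteljau(ctrl, 1.0) == ctrl[-1]
# The midpoint of this symmetric curve lies on its axis of symmetry x = 2.
assert abs(de_casteljau(ctrl, 0.5)[0] - 2.0) < 1e-12
```

Clipping methods gain their speed by discarding, at each such subdivision, the parameter ranges that provably cannot contain an intersection — the fat line/fat curve tests decide which ranges survive.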

Research Intern at Adobe Research
    Big Data Experience Lab, hosted by Dr. Somdeb Sarkhel, Dr. Saayan Mitra, and Dr. Vishy Swaminathan. 06/2017 ~ 09/2017.
Research Intern at Virginia Tech
    Machine Learning & Perception Group, hosted by Prof. Dhruv Batra. 08/2013 ~ 11/2013, 04/2014 ~ 08/2014.

Teaching assistant for
     CS 171 Intro to Artificial Intelligence
     CS 178 Machine Learning and Data Mining
     CS 179 Intro to Graphical Models