Everyday Hands in Action

October 2015 | Posted by James

 
[splash figure for the paper]
  • We present our paper on everyday hands in action.
  • G. Rogez, J. Supancic, D. Ramanan. "Understanding Everyday Hands in Action from RGB-D Images" International Conference on Computer Vision (ICCV), Santiago, Chile, December 2015.

Pose in Egocentric Workspaces

June 28th, 2015 | Posted by James

 
[splash figure for the paper]
  • We present our paper on hand pose recognition in egocentric workspaces.
  • G. Rogez, J. Supancic, D. Ramanan. "First-Person Pose Recognition using Egocentric Workspaces" Computer Vision and Pattern Recognition (CVPR), Boston, MA, June 2015.

  • The title was formerly "Egocentric Pose Recognition in Four Lines of Code".

Hand Datasets and Methods

September 29th, 2015 | Posted by James

 
[dataset and evaluation splash figure]
  • Our survey and evaluation of hand datasets and pose-estimation methods is on arXiv.
  • J. Supancic, G. Rogez, Y. Yang, J. Shotton, D. Ramanan. "Depth-based hand pose estimation: methods, data, and challenges" arXiv preprint arXiv:1504.06378, 2015.
  • Links to datasets mentioned in the paper: KTH, LISA, ASTAR, MSR, NYU, ICL, FORTH, Dexter, ETH-Z, Max-Planck-Gesellschaft (Synthetic, Real), HandNet, FingerPaint

HANDS-2015 Workshop

[headshot]

July 30th, 2014 | Posted by James

 
I'm organizing a workshop and challenge associated with CVPR-2015. More information can be found here.

3D Hand Pose in Egocentric RGB+D

[egocentric splash figure]

Jan 5th, 2015 | Posted by James

 
We estimate hand pose from an egocentric view using random cascades trained on synthetic depth data.
  • G. Rogez, M. Khademi, J. Supancic, J. Montiel, D. Ramanan. "3D Hand Pose Detection in Egocentric RGB-D Images" Workshop on Consumer Depth Cameras for Computer Vision, European Conference on Computer Vision (ECCV), Zurich, Switzerland, Sept. 2014. PDF

Self Paced Long Term Tracking

[learning splash figure]

April 2013 | Posted by James

 
Our paper, "Self Paced Learning for Long-Term Tracking", was accepted for publication at CVPR 2013. The paper presents a novel technique for adapting an appearance model during long-term tracking.

Displaying Real Valued Data in OpenCV

Dec 19th, 2012 | Posted by James

 
The imageeq function, which follows, equalizes a floating-point image before displaying it: it spreads the image's order statistics evenly across 256 gray levels, so every gray level covers roughly the same number of pixels. This makes it much better than MATLAB's imagesc for visualizing depth data, where most of the dynamic range is often concentrated in a narrow band.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

using namespace cv;
using std::vector;

void imageeq(const char* winName, Mat_< float > im)
{
    // collect the sorted unique pixel values (the order statistics)
    vector< float > values(im.begin(), im.end());
    std::sort(values.begin(), values.end());
    values.erase(std::unique(values.begin(), values.end()), values.end());

    // assign each pixel the gray level of the quantile band it falls in
    // (initialize to 0 so pixels matching no band, e.g. NaNs, stay black)
    Mat_< uchar > showMe(im.rows, im.cols, static_cast< uchar >(0));
    float oldQuant = 0;
    for(int qIter = 1; qIter <= 256; qIter++)
    {
        float quantile = static_cast< float >(qIter)/256;

        float thresh_low  = values[static_cast< size_t >(oldQuant*(values.size() - 1))];
        float thresh_high = values[static_cast< size_t >(quantile*(values.size() - 1))];
        for(int rIter = 0; rIter < im.rows; rIter++)
            for(int cIter = 0; cIter < im.cols; cIter++)
            {
                float curValue = im(rIter,cIter);
                if(curValue >= thresh_low && curValue <= thresh_high)
                    showMe(rIter,cIter) = qIter - 1;
            }

        oldQuant = quantile;
    }

    imshow(winName,showMe);
}
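The core idea, mapping each value to a gray level by its rank among the unique values, can be sketched without OpenCV. The `equalize` helper below is a hypothetical standalone version of that mapping (not part of the code above); it assumes a non-empty input vector.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Map each float to an 8-bit level by its rank among the unique sorted
// values -- the same order-statistic idea imageeq applies per pixel.
std::vector<std::uint8_t> equalize(const std::vector<float>& vals)
{
    // build the sorted list of unique values
    std::vector<float> uniq(vals);
    std::sort(uniq.begin(), uniq.end());
    uniq.erase(std::unique(uniq.begin(), uniq.end()), uniq.end());

    std::vector<std::uint8_t> out;
    out.reserve(vals.size());
    for (float v : vals)
    {
        // rank of v among the unique values
        std::size_t rank = std::lower_bound(uniq.begin(), uniq.end(), v) - uniq.begin();
        // spread the ranks evenly over [0, 255]
        std::size_t denom = std::max<std::size_t>(uniq.size() - 1, 1);
        out.push_back(static_cast<std::uint8_t>(255.0 * rank / denom));
    }
    return out;
}
```

For example, the values {0, 0, 1000, 1000.5} map to gray levels {0, 0, 127, 255}: the two far-apart clusters end up at opposite ends of the range regardless of their absolute separation, which is exactly why this beats a linear rescale for depth images.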

Welcome to my academic homepage

[headshot]

Dec 18th, 2012 | Posted by James

 
The figure above shows an inverse-HOG visualization of the HOG features computed from a picture of me. I am a first-year PhD student at UC Irvine, working in computer vision. I'm interested in how we might exploit temporal data (tracking) and depth data (e.g., from Kinect) in semi-supervised learning to build effective detectors.