Tsinghua University and Texas A&M University
This paper describes a new method for acquiring physically realistic hand manipulation data from multiple video streams. The key idea of our approach is to introduce a composite motion control that simultaneously models hand articulation, object movement, and the subtle interaction between the hand and the object. We formulate video-based hand manipulation capture as an optimization problem that maximizes the consistency between the simulated motion and the observed image data: we search for an optimal motion control that drives the simulation to best match the observations. We demonstrate the effectiveness of our approach by capturing a wide range of high-fidelity dexterous manipulation data, and we show the power of the recovered motion controllers by adapting the captured motion to new objects with different properties. The system achieves superior performance over alternative methods such as marker-based motion capture and kinematic hand motion tracking.
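The optimization framework described above is an analysis-by-synthesis loop: propose motion controls, run the physical simulation forward, and score the result against the observed image data. The following is a minimal toy sketch of that idea only; the 1-DoF joint model, the PD-controller gains, and the grid search are all illustrative assumptions and not the paper's actual hand model or solver.

```python
import numpy as np

def simulate(control, steps=50, dt=0.1):
    """Toy forward simulation of a single "joint": the control is a
    constant target angle tracked by a damped spring (PD controller).
    Stiffness (20.0) and damping (4.0) are arbitrary assumptions."""
    angle, velocity = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        accel = 20.0 * (control - angle) - 4.0 * velocity
        velocity += accel * dt
        angle += velocity * dt
        trajectory.append(angle)
    return np.array(trajectory)

# "Observed" data: in the paper this comes from multi-view video;
# here we synthesize it from a known ground-truth control.
true_control = 0.7
observed = simulate(true_control)

def objective(control):
    """Consistency between simulated and observed motion, scored as the
    sum of squared per-frame errors (lower is better)."""
    return np.sum((simulate(control) - observed) ** 2)

# Search for the motion control that best explains the observations.
# A coarse grid search stands in for the paper's optimizer.
candidates = np.linspace(-1.0, 1.0, 201)
best_control = min(candidates, key=objective)
```

Because the "observed" trajectory was synthesized from `true_control`, the search recovers a control very close to 0.7; with real video, the objective would instead compare rendered hand/object states against image features.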
Fig 1. Modeling high-fidelity dexterous manipulation data from videos: (top) observed image data; (bottom) reconstructed motion.
Fig 2. Retargeting captured motion data to new objects with different geometries. From left to right: the original image data, the reconstructed manipulation data, and the retargeted motions for grasping and manipulating three different objects.
Wang, Yangang, Jianyuan Min, Jianjie Zhang, Yebin Liu, Feng Xu, Qionghai Dai, and Jinxiang Chai. "Video-based hand manipulation capture through composite motion control." ACM Transactions on Graphics (TOG) 32, no. 4 (2013): 1-14.
@article{wang2013video,
  title={Video-based hand manipulation capture through composite motion control},
  author={Wang, Yangang and Min, Jianyuan and Zhang, Jianjie and Liu, Yebin and Xu, Feng and Dai, Qionghai and Chai, Jinxiang},
  journal={ACM Transactions on Graphics (TOG)},
  volume={32},
  number={4},
  pages={1--14},
  year={2013},
  publisher={ACM New York, NY, USA}
}