1. Tsinghua University
2. Beihang University
3. Google Inc.
4. University of Southern California / USC Institute for Creative Technologies
5. Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
We propose a lightweight yet highly robust method for real-time human performance capture based on a single depth camera and sparse inertial measurement units (IMUs). Our method combines non-rigid surface tracking and volumetric fusion to simultaneously reconstruct challenging motions, detailed geometry and the inner body shape of a clothed subject. The proposed hybrid motion tracking algorithm and efficient per-frame sensor calibration technique enable non-rigid surface reconstruction for fast motions and for challenging poses with severe occlusions. Significant fusion artifacts are reduced by a new confidence measurement in our adaptive TSDF-based fusion. These contributions are mutually beneficial within our reconstruction system, enabling practical human performance capture that is real-time, robust, low-cost and easy to deploy. Experiments show that extremely challenging performances and loop-closure problems can be handled successfully.
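To make the confidence-weighted fusion step concrete, below is a minimal sketch of TSDF integration with a per-pixel confidence weight, written in Python/NumPy. The function name `fuse_depth`, the confidence map `conf`, and the simple pinhole projection are illustrative assumptions; the paper's actual fusion operates on a non-rigidly warped volume and uses its own confidence measurement.

```python
# Minimal sketch of confidence-weighted TSDF fusion (illustrative only;
# names and the static-camera projection are assumptions, not the paper's code).
import numpy as np

def fuse_depth(tsdf, weight, voxels, depth, K, conf, trunc=0.03):
    """Integrate one depth frame into a TSDF volume.

    tsdf, weight : (N,) running signed distance and accumulated weight
    voxels       : (N, 3) voxel centers in camera coordinates (meters)
    depth        : (H, W) depth map in meters
    K            : (3, 3) camera intrinsics
    conf         : (H, W) per-pixel fusion confidence in [0, 1]
    """
    H, W = depth.shape
    # Project voxel centers into the depth image (pinhole model).
    uv = (K @ voxels.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    valid = (voxels[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Signed distance along the viewing ray, truncated to [-trunc, trunc].
    sdf = np.full(len(voxels), np.nan)
    sdf[valid] = depth[v[valid], u[valid]] - voxels[valid, 2]
    keep = valid & (sdf > -trunc)  # discard voxels far behind the surface
    d = np.clip(sdf[keep] / trunc, -1.0, 1.0)

    # Confidence-weighted running average: observations with low confidence
    # (e.g. fast motion or unreliable tracking) contribute less to the volume.
    w_new = conf[v[keep], u[keep]]
    tsdf[keep] = (tsdf[keep] * weight[keep] + d * w_new) \
                 / (weight[keep] + w_new + 1e-8)
    weight[keep] = np.minimum(weight[keep] + w_new, 255.0)
    return tsdf, weight
```

The key design point this sketch illustrates is that the fusion weight is not uniform: down-weighting unreliable observations is what suppresses the fusion artifacts mentioned above, while well-tracked regions accumulate weight and converge to a stable surface.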
Fig. 1. Illustration of the HybridFusion pipeline.
Fig. 2. Example results reconstructed by our system. In each triplet, the left image is the color reference, the middle image is the fused surface geometry, and the right image is the inner body shape estimated by our system.
Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu. "HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs". In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
@InProceedings{Zheng2018HybridFusion,
  author    = {Zheng, Zerong and Yu, Tao and Li, Hao and Guo, Kaiwen and Dai, Qionghai and Fang, Lu and Liu, Yebin},
  title     = {HybridFusion: Real-time Performance Capture Using a Single Depth Sensor and Sparse IMUs},
  booktitle = {European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018},
}