1 Tsinghua University 2 Beihang University 3 Max-Planck-Institut für Informatik
This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation, which includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each incoming frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which previous capture methods could not achieve. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still satisfying physical constraints. Results and evaluations demonstrate the effectiveness of our method. Our method also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.
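The per-frame loop described above (skeleton tracking, then cloth simulation, then depth fitting treated as a physical pull toward the observation) can be sketched in miniature. This is purely an illustrative toy, not the authors' implementation: the cloth is a 1D spring chain, and all function names, constants, and the soft-spring depth-fitting step are assumptions for exposition.

```python
# Toy sketch of the SimulCap-style per-frame order of operations.
# All names and constants are illustrative assumptions, not the paper's code.

def simulate_cloth(positions, velocities, rest_len, dt=0.033, k=50.0, g=-9.8):
    """One semi-implicit Euler step of a 1D chain of springs,
    standing in for the cloth-simulation stage."""
    forces = [g] * len(positions)
    for i in range(len(positions) - 1):
        # Structural spring between consecutive vertices.
        stretch = (positions[i + 1] - positions[i]) - rest_len
        f = k * stretch
        forces[i] += f
        forces[i + 1] -= f
    new_v = [v + f * dt for v, f in zip(velocities, forces)]
    new_p = [p + v * dt for p, v in zip(positions, new_v)]
    # Pin the first vertex, standing in for attachment to the tracked body.
    new_p[0], new_v[0] = positions[0], 0.0
    return new_p, new_v

def fit_to_depth(positions, depth_targets, iters=10, step=0.3):
    """Iterative depth fitting as a physical process: each vertex is
    pulled toward its observed depth sample by a soft 'spring', so the
    result stays close to both the simulation and the observation."""
    p = list(positions)
    for _ in range(iters):
        p = [x + step * (t - x) for x, t in zip(p, depth_targets)]
    return p
```

In a real system the simulation stage would act on a full garment mesh with collision handling against the body layer, and the depth-fitting forces would come from point-to-surface correspondences with the input depth map; the toy above only preserves the sequential structure.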
Fig. 1. The pipeline of our system.
Fig. 2. System setup and live reconstruction results.
Fig. 3. Example results reconstructed by our system.
Tao Yu, Zerong Zheng, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Gerard Pons-Moll, Yebin Liu. "SimulCap: Single-View Human Performance Capture with Cloth Simulation". IEEE CVPR 2019.
@InProceedings{Yu2019SimulCap,
  author    = {Yu, Tao and Zheng, Zerong and Zhong, Yuan and Zhao, Jianhui and Dai, Qionghai and Pons-Moll, Gerard and Liu, Yebin},
  title     = {SimulCap: Single-View Human Performance Capture with Cloth Simulation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019},
  publisher = {IEEE},
}