Tsinghua University, Hong Kong University of Science and Technology, Duke University
We present a multi-scale camera array for capturing and synthesizing gigapixel videos efficiently. Our acquisition setup contains a reference camera with a short-focus lens that captures a large field-of-view video, and a number of unstructured long-focus cameras that capture local-view details. Based on this new design, we propose an iterative feature matching and image warping method that warps each local-view video to the reference video independently. The key feature of the proposed algorithm is its robustness and high accuracy under a huge resolution gap (more than 8× between the reference and the local-view videos), camera parallax, complex scene appearance, and color inconsistency among cameras. Experimental results show that the proposed multi-scale camera array and cross-resolution video warping scheme can generate seamless gigapixel video without camera calibration or large-overlap constraints between the local-view cameras.
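The abstract describes building sparse feature correspondences between each local-view video and the low-resolution reference, then warping the local view into the reference frame. As a minimal, self-contained sketch of that alignment step (not the authors' pipeline, which uses iterative cross-resolution matching and a non-rigid warp to handle parallax), a single projective warp estimated from matched feature points could look like this; `estimate_homography` and `warp_points` are illustrative helpers, not functions from the paper:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst with the DLT.

    src, dst: (N, 2) arrays of matched feature coordinates, N >= 4,
    e.g. matches between a local-view frame and the reference frame.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The homography is the right singular vector for the smallest
    # singular value of A, reshaped to 3x3 and scale-normalized.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply homography H to (N, 2) points via homogeneous projection."""
    ones = np.ones((len(pts), 1))
    hom = np.hstack([pts, ones]) @ H.T
    return hom[:, :2] / hom[:, 2:3]
```

In practice one would detect and match features (e.g. SIFT with ratio-test filtering and RANSAC), and a single global homography cannot absorb the parallax between unstructured cameras, which is why the paper warps each local view with a more flexible, iteratively refined model.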
Fig. 1. Pipeline of our cross-resolution matching and warping algorithm. The red arrows denote the feature correspondence building process and the blue arrows denote the warping process.
Fig. 2. Capture device. (a) Our capturing device with hybrid cameras. (b) Example data captured by our camera array.
Yuan, Xiaoyun, Lu Fang, Qionghai Dai, David J. Brady, and Yebin Liu. "Multiscale gigapixel video: A cross resolution image matching and warping approach." In 2017 IEEE International Conference on Computational Photography (ICCP), pp. 1-9. IEEE, 2017.
@inproceedings{yuan2017multiscale,
title={Multiscale gigapixel video: A cross resolution image matching and warping approach},
author={Yuan, Xiaoyun and Fang, Lu and Dai, Qionghai and Brady, David J and Liu, Yebin},
booktitle={2017 IEEE International Conference on Computational Photography (ICCP)},
pages={1--9},
year={2017},
organization={IEEE}
}