We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video. We integrate a learning-based depth prior, in the form of a convolutional neural network trained for single-image depth estimation, with geometric optimization to estimate a smooth camera trajectory as well as detailed and stable depth reconstruction. Our algorithm combines two complementary techniques: (1) flexible deformation splines for low-frequency, large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details. In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures that contain a significant amount of noise, shake, motion blur, and rolling shutter deformation. Our method quantitatively outperforms state-of-the-art methods on the Sintel benchmark for both depth and pose estimation, and attains favorable qualitative results on diverse in-the-wild datasets.