We propose a novel approach to compute high-resolution (2048x1024 and higher) depths for panoramas that is significantly faster and both quantitatively and qualitatively more accurate than the current state-of-the-art method (360MonoDepth). As traditional neural network-based methods are limited in output image size (up to 1024x512) by GPU memory constraints, both 360MonoDepth and our method rely on stitching multiple perspective disparity or depth images to produce a unified panoramic depth map. However, to achieve globally consistent stitching, 360MonoDepth relies on solving extensive disparity map alignment and Poisson-based blending problems, leading to high computation times. Instead, we propose to use an existing panoramic depth map (computed in real time by any panorama-based method) as the common target for the individual perspective depth maps to register to. This key idea turns producing globally consistent stitching results into a straightforward task. Our experiments show that our method generates qualitatively better results than existing panorama-based methods, and further outperforms them quantitatively on datasets unseen by these methods.
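To make the registration idea concrete, the following is a minimal sketch (not the authors' implementation) of how each perspective depth map could be aligned to a reference panoramic depth map via a least-squares scale-and-offset fit and then blended; the function names, the affine alignment model, and the weighted blending are assumptions for illustration, and the reprojection of perspective views onto the equirectangular grid is omitted.

```python
import numpy as np

def align_to_reference(perspective_depth, reference_depth, valid_mask=None):
    """Fit a per-view scale s and offset t so that s * d + t matches the
    reference panoramic depth on the valid (overlapping) pixels,
    in the least-squares sense. Hypothetical helper for illustration."""
    d = perspective_depth.ravel()
    r = reference_depth.ravel()
    if valid_mask is not None:
        m = valid_mask.ravel()
        d, r = d[m], r[m]
    # Solve [d 1] [s t]^T ~= r in the least-squares sense.
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, r, rcond=None)
    return s * perspective_depth + t

def stitch_depths(aligned_views, blend_weights):
    """Weighted average of aligned per-view depths already reprojected onto
    the panoramic grid; weights taper toward each view's boundary."""
    num = sum(w * v for v, w in zip(aligned_views, blend_weights))
    den = sum(blend_weights) + 1e-8
    return num / den
```

Because every perspective view is registered to the same panoramic target, no joint optimization across views is needed, which is the source of the speedup over pairwise alignment and Poisson blending.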