Limited by cost and technology, the resolution of depth maps collected by depth cameras is often lower than that of their associated RGB cameras. Although there has been much research on RGB image super-resolution (SR), depth map super-resolution still suffers from obvious jagged edges and excessive loss of detail. To tackle these difficulties, in this work we propose a multi-scale progressive fusion network for depth map SR, which possesses an asymptotic structure to integrate hierarchical features from different domains. Given a low-resolution (LR) depth map and its associated high-resolution (HR) color image, we utilize two separate branches to achieve multi-scale feature learning. Next, we propose a step-wise fusion strategy to restore the HR depth map. Finally, a multi-dimensional loss is introduced to enforce clear boundaries and details. Extensive experiments show that our proposed method outperforms state-of-the-art methods both qualitatively and quantitatively.
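The coarse-to-fine, step-wise fusion described above can be sketched as follows. This is purely an illustration under assumed details: nearest-neighbor upsampling stands in for the network's learned upsampling, and simple averaging stands in for the learned fusion modules; the actual architecture, feature shapes, and fusion operators are not specified in the abstract.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of an (H, W, C) feature map
    # (placeholder for a learned upsampling layer).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def stepwise_fusion(depth_feats, color_feats):
    """Progressively upsample depth-branch features and fuse them with
    color-branch features at each matching scale, coarse to fine.

    depth_feats: coarsest-scale depth feature map, shape (H, W, C).
    color_feats: color feature maps ordered coarse -> fine, each at
                 2x the spatial resolution of the previous one.
    """
    x = depth_feats
    for c in color_feats:
        x = upsample2x(x)    # move depth features up one scale
        x = 0.5 * (x + c)    # placeholder fusion; learned in practice
    return x

# Toy example: 4x SR achieved as two progressive 2x steps.
d = np.zeros((8, 8, 16))
colors = [np.ones((16, 16, 16)), np.ones((32, 32, 16))]
out = stepwise_fusion(d, colors)
print(out.shape)  # (32, 32, 16)
```

The design point illustrated is that resolution is recovered gradually, one scale at a time, with color guidance injected at every scale, rather than in a single large upsampling step.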