Performing super-resolution of a depth image using the guidance of an RGB image is a problem that arises in several fields, such as robotics, medical imaging, and remote sensing. While deep learning methods have achieved good results on this problem, recent work has highlighted the value of combining modern methods with more formal frameworks. In this work, we propose a novel approach that combines guided anisotropic diffusion with a deep convolutional network and advances the state of the art for guided depth super-resolution. The edge-transferring and edge-enhancing properties of the diffusion are boosted by the contextual reasoning capabilities of modern networks, and a strict adjustment step guarantees perfect adherence to the source image. We achieve unprecedented results on three commonly used benchmarks for guided depth super-resolution. The performance gain over competing methods is largest at large upsampling factors, such as x32 scaling. Code for the proposed method will be made available to promote reproducibility of our results.
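To make the two ingredients named above concrete, the following is a minimal sketch, not the authors' implementation, of one guided anisotropic diffusion step paired with a hard adjustment that enforces exact consistency with the low-resolution source depth. The Perona-Malik-style conductance, the average-pooling downsampling model, and all function names (diffusion_coefficients, diffuse_step, adjust_to_source) are illustrative assumptions.

```python
# Illustrative sketch only: guided anisotropic diffusion with a hard
# source-adherence step, assuming an average-pooling downsampling model.
import torch
import torch.nn.functional as F

def diffusion_coefficients(guide, lam=0.1):
    # Conductance from guide-image gradients (Perona-Malik style):
    # flat regions diffuse strongly, guide edges block diffusion.
    gray = guide.mean(dim=1, keepdim=True)            # (B,1,H,W)
    dx = gray[..., :, 1:] - gray[..., :, :-1]         # horizontal differences
    dy = gray[..., 1:, :] - gray[..., :-1, :]         # vertical differences
    cx = torch.exp(-(dx / lam) ** 2)
    cy = torch.exp(-(dy / lam) ** 2)
    return cx, cy

def diffuse_step(depth, cx, cy, step=0.2):
    # One explicit diffusion update of the high-resolution depth estimate.
    dx = depth[..., :, 1:] - depth[..., :, :-1]
    dy = depth[..., 1:, :] - depth[..., :-1, :]
    flux_x = cx * dx
    flux_y = cy * dy
    div = torch.zeros_like(depth)
    div[..., :, :-1] += flux_x                        # discrete divergence
    div[..., :, 1:]  -= flux_x
    div[..., :-1, :] += flux_y
    div[..., 1:, :]  -= flux_y
    return depth + step * div

def adjust_to_source(depth, source_lr, scale):
    # Hard adjustment: add back the upsampled residual so that the
    # average-pooled prediction matches the low-resolution source exactly.
    residual = source_lr - F.avg_pool2d(depth, scale)
    return depth + F.interpolate(residual, scale_factor=scale, mode="nearest")

# Toy usage: x8 super-resolution of a random depth map with a random RGB guide.
scale = 8
guide = torch.rand(1, 3, 64, 64)
source_lr = torch.rand(1, 1, 64 // scale, 64 // scale)
depth = F.interpolate(source_lr, scale_factor=scale,
                      mode="bilinear", align_corners=False)

cx, cy = diffusion_coefficients(guide)
for _ in range(100):
    depth = diffuse_step(depth, cx, cy)
    depth = adjust_to_source(depth, source_lr, scale)
```

In the paper's full method the diffusion is additionally driven by features produced by a deep convolutional network rather than by raw guide gradients alone; the sketch only illustrates the diffusion-plus-adjustment structure described in the abstract.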