Performing super-resolution of a depth image under the guidance of an RGB image is a problem relevant to several fields, such as robotics, medical imaging, and remote sensing. While deep learning methods have achieved good results on this problem, recent work has highlighted the value of combining modern methods with more formal frameworks. In this work, we propose a novel approach that combines guided anisotropic diffusion with a deep convolutional network and advances the state of the art for guided depth super-resolution. The edge-transferring and edge-enhancing properties of the diffusion are boosted by the contextual reasoning capabilities of modern networks, and a strict adjustment step guarantees perfect adherence to the source image. We achieve unprecedented results on three commonly used benchmarks for guided depth super-resolution, and the performance gain over competing methods is largest at high upsampling factors such as x32. Code for the proposed method is available at https://github.com/prs-eth/Diffusion-Super-Resolution to promote reproducibility of our results.
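To make the two ingredients named above concrete, the sketch below illustrates the generic pattern the abstract describes: an explicit anisotropic-diffusion update whose conductivities are derived from the RGB guide (a Perona-Malik-style edge-stopping function), alternated with a hard adjustment that forces each block of the high-resolution estimate to average exactly to the corresponding low-resolution source pixel. This is a minimal illustration under our own assumptions, not the implementation from the linked repository; the function names, the exponential edge-stopping function, the parameters `lam` and `kappa`, and the additive block-mean correction are all illustrative choices.

```python
import numpy as np

def guided_diffusion_step(depth, guide, lam=0.24, kappa=0.1):
    """One explicit anisotropic-diffusion update of `depth` (H x W).

    Diffusion toward each 4-neighbour is gated by an edge-stopping
    function of the RGB `guide` (H x W x 3), so depth is smoothed
    within regions that look uniform in the guide but not across
    guide edges. Borders wrap around (np.roll) for simplicity.
    """
    update = np.zeros_like(depth)
    for shift, axis in [(-1, 0), (1, 0), (-1, 1), (1, 1)]:
        d = np.roll(depth, shift, axis=axis) - depth        # neighbour difference
        c = np.abs(np.roll(guide, shift, axis=axis) - guide).max(axis=-1)
        update += np.exp(-(c / kappa) ** 2) * d             # small weight at guide edges
    return depth + lam * update                             # lam <= 0.25 for stability

def adjust_to_source(depth, lr_depth, scale):
    """Strict adjustment: shift every scale x scale block of the
    high-resolution estimate so its mean equals the corresponding
    pixel of the low-resolution source `lr_depth` exactly."""
    H, W = depth.shape
    blocks = depth.reshape(H // scale, scale, W // scale, scale).copy()
    residual = lr_depth - blocks.mean(axis=(1, 3))
    blocks += residual[:, None, :, None]
    return blocks.reshape(H, W)

# Illustrative usage: initialise by nearest-neighbour upsampling, then
# alternate diffusion and adjustment so the result stays consistent
# with the source under block-average downsampling.
# hr = np.kron(lr_depth, np.ones((scale, scale)))
# for _ in range(500):
#     hr = adjust_to_source(guided_diffusion_step(hr, guide), lr_depth, scale)
```

In this toy scheme the diffusion propagates depth along guide edges while the adjustment re-imposes exact consistency with the source after every iteration; the paper's contribution, per the abstract, is to let a deep convolutional network supply the contextual reasoning that a hand-crafted edge-stopping function like the one above lacks.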