Light-weight time-of-flight (ToF) depth sensors are small, cheap, and low-power, and have been widely deployed on mobile devices for purposes such as autofocus and obstacle detection. However, due to their specific measurement principle (a depth distribution over a region rather than a depth value at each pixel) and extremely low resolution, they are insufficient for applications requiring high-fidelity depth, such as 3D reconstruction. In this paper, we propose DELTAR, a novel method that empowers light-weight ToF sensors with the capability of measuring high-resolution, accurate depth by cooperating with a color image. As the core of DELTAR, a feature extractor customized for depth distributions and an attention-based neural architecture are proposed to efficiently fuse information from the color and ToF domains. To evaluate our system in real-world scenarios, we design a data collection device and propose a new approach to calibrate the RGB camera and the ToF sensor. Experiments show that our method produces more accurate depth than existing frameworks designed for depth completion and depth super-resolution, and achieves performance on par with a commodity-level RGB-D sensor. Code and data are available at https://zju3dv.github.io/deltar/.