Since context modeling is critical for estimating depth from a single image, researchers have put tremendous effort into obtaining global context. Many global manipulations are designed for traditional CNN-based architectures to overcome the locality of convolutions. Attention mechanisms and transformers, originally designed for capturing long-range dependencies, might be a better choice, but they usually complicate architectures and can reduce inference speed. In this work, we propose a pure transformer architecture called SideRT that attains excellent predictions in real time. To capture better global context, Cross-Scale Attention (CSA) and Multi-Scale Refinement (MSR) modules are designed to work collaboratively to fuse features of different scales efficiently. CSA modules focus on fusing features of high semantic similarity, while MSR modules aim to fuse features at corresponding positions. These two modules contain few learnable parameters and no convolutions, and on this basis a lightweight yet effective model is built. This architecture achieves state-of-the-art performance in real time (51.3 FPS) and becomes much faster, with a reasonable performance drop, on the smaller backbone Swin-T (83.1 FPS). Furthermore, its performance surpasses the previous state of the art by a large margin, improving the AbsRel metric by 6.9% on KITTI and 9.7% on NYU. To the best of our knowledge, this is the first work to show that transformer-based networks can attain state-of-the-art performance in real time for single-image depth estimation. Code will be made available soon.
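To make the fusion scheme sketched above concrete, the following is a minimal PyTorch illustration of the two ideas the abstract names: a cross-scale attention block in which fine-scale tokens query coarse-scale tokens (fusing by semantic similarity, with linear projections only and no convolutions), and a position-wise refinement step that blends upsampled coarse features with fine features at corresponding locations. The class names, projection layout, and gated blending rule are assumptions made for illustration; the paper's exact CSA/MSR designs may differ.

```python
# A rough sketch of cross-scale fusion in the spirit of CSA/MSR.
# Module names and the fusion rules are illustrative assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleAttention(nn.Module):
    """Fuse a fine-scale token sequence with a coarse-scale one via
    cross-attention: queries come from the fine scale, keys/values from
    the coarse scale, so each fine token attends to semantically similar
    coarse tokens. Linear projections only -- no convolutions."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # fine: (B, N_fine, C), coarse: (B, N_coarse, C)
        kv = self.norm_kv(coarse)
        fused, _ = self.attn(self.norm_q(fine), kv, kv)
        return fine + fused  # residual fusion

class MultiScaleRefinement(nn.Module):
    """Fuse features at corresponding spatial positions: upsample the
    coarse map to the fine resolution and blend with a learnable gate."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.tensor(0.5))

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # fine: (B, C, H, W), coarse: (B, C, H', W') with H' < H, W' < W
        up = F.interpolate(coarse, size=fine.shape[-2:],
                           mode='bilinear', align_corners=False)
        return self.gate * fine + (1 - self.gate) * up

# Usage sketch: fuse a 56x56 fine map with a 28x28 coarse map, dim 96.
if __name__ == "__main__":
    B, C = 2, 96
    fine_tok, coarse_tok = torch.randn(B, 56 * 56, C), torch.randn(B, 28 * 28, C)
    csa = CrossScaleAttention(C)
    print(csa(fine_tok, coarse_tok).shape)       # (2, 3136, 96)
    fine_map, coarse_map = torch.randn(B, C, 56, 56), torch.randn(B, C, 28, 28)
    msr = MultiScaleRefinement()
    print(msr(fine_map, coarse_map).shape)       # (2, 96, 56, 56)
```

Keeping both blocks free of convolutions, as the abstract emphasizes, is what keeps the parameter count small: the only learned weights here are the attention projections, the LayerNorms, and a single blending gate.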