The ubiquitous multi-camera setup on modern autonomous vehicles provides an opportunity to construct surround-view depth. Existing methods, however, either perform independent monocular depth estimation on each camera or rely on computationally heavy self-attention mechanisms. In this paper, we propose a novel guided attention architecture, EGA-Depth, which improves both the efficiency and the accuracy of self-supervised multi-camera depth estimation. More specifically, for each camera, we use its perspective view as the query to cross-reference its neighboring views and derive informative features for that camera view. This allows the model to perform attention only across views with considerable overlaps and to avoid the costly computation of standard self-attention. Given its efficiency, EGA-Depth enables us to exploit higher-resolution visual features, leading to improved accuracy. Furthermore, EGA-Depth can incorporate more frames from previous time steps, as it scales linearly w.r.t. the number of views and frames. Extensive experiments on two challenging autonomous driving benchmarks, nuScenes and DDAD, demonstrate the efficacy of our proposed EGA-Depth and show that it achieves the new state of the art in self-supervised multi-camera depth estimation.
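To make the guided attention idea concrete, the following is a minimal PyTorch sketch of cross-view attention in which the queries come from a single camera's features and the keys/values come only from its overlapping neighbor views. All module and tensor names here are hypothetical illustrations of the described mechanism, not the authors' actual EGA-Depth implementation.

```python
# Hedged sketch: guided cross-view attention, assuming PyTorch.
# Names (GuidedCrossViewAttention, tensor shapes) are illustrative only.
import torch
import torch.nn as nn


class GuidedCrossViewAttention(nn.Module):
    """Queries come from one camera's feature tokens; keys/values come only from
    its overlapping neighbor views, so the cost grows linearly with the number of
    neighbor views/frames rather than quadratically over all views."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # target:    (B, H*W, C) tokens of the camera view being refined
        # neighbors: (B, N*H*W, C) tokens from its N neighboring (and/or past) views
        refined, _ = self.attn(query=target, key=neighbors, value=neighbors)
        return target + refined  # residual keeps the original view features


# Usage: refine the front camera's features with two adjacent camera views.
B, HW, C = 2, 16 * 44, 256
front = torch.randn(B, HW, C)
neighbors = torch.cat([torch.randn(B, HW, C), torch.randn(B, HW, C)], dim=1)
refined = GuidedCrossViewAttention(C)(front, neighbors)
```

Because each view attends only to its few overlapping neighbors, adding another neighbor view or a previous-frame view simply lengthens the key/value sequence, which is consistent with the linear scaling claimed above.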