Attention-based models, such as transformers, have shown outstanding performance on dense prediction tasks such as semantic segmentation, owing to their capability of capturing long-range dependencies in an image. However, the benefit of transformers for monocular depth prediction has seldom been explored so far. This paper benchmarks various transformer-based models for the depth estimation task on the indoor NYUV2 dataset and the outdoor KITTI dataset. We propose a novel attention-based architecture, Depthformer, for monocular depth estimation that uses multi-head self-attention to produce multiscale feature maps, which are effectively combined by our proposed decoder network. We also propose a Transbins module that divides the depth range into bins whose center values are estimated adaptively per image. The final depth is estimated as a linear combination of the bin centers for each pixel. The Transbins module takes advantage of the global receptive field of the transformer module in the encoding stage. Experimental results on the NYUV2 and KITTI depth estimation benchmarks demonstrate that our proposed method improves the state of the art by 3.3% and 3.3%, respectively, in terms of Root Mean Squared Error (RMSE). Code is available at https://github.com/ashutosh1807/Depthformer.git.
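To make the bin-combination step concrete, below is a minimal sketch of how a per-pixel depth map can be formed as a linear combination of adaptively predicted bin centers, in the style the abstract describes. The function name, tensor shapes, and the `min_depth`/`max_depth` parameters are illustrative assumptions, not the authors' exact implementation; see the linked repository for the actual code.

```python
import torch

def depth_from_bins(bin_widths, logits, min_depth=1e-3, max_depth=10.0):
    """Hypothetical sketch: combine adaptive bin centers into a depth map.

    bin_widths: (B, N) normalized bin widths per image (e.g. a softmax output
                summing to 1), predicted adaptively from global image features.
    logits:     (B, N, H, W) per-pixel scores over the N bins.
    """
    # Scale normalized widths to the metric depth range [min_depth, max_depth].
    widths = (max_depth - min_depth) * bin_widths                  # (B, N)
    # Bin edges start at min_depth; each center sits mid-way in its bin.
    edges = torch.cumsum(torch.cat(
        [torch.full_like(widths[:, :1], min_depth), widths], dim=1), dim=1)
    centers = 0.5 * (edges[:, :-1] + edges[:, 1:])                 # (B, N)
    # Per-pixel probabilities over the bins.
    probs = torch.softmax(logits, dim=1)                           # (B, N, H, W)
    # Final depth: linear combination of bin centers at every pixel.
    return torch.einsum('bn,bnhw->bhw', centers, probs).unsqueeze(1)

# Example usage with dummy predictions (assumed shapes):
bin_widths = torch.softmax(torch.randn(2, 256), dim=1)
logits = torch.randn(2, 256, 60, 80)
depth = depth_from_bins(bin_widths, logits)   # -> (2, 1, 60, 80)
```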