The objective of this paper is to learn context- and depth-aware feature representations for monocular 3D object detection. We make the following contributions: (i) rather than appealing to the complicated pseudo-LiDAR-based approaches, we propose a depth-conditioned dynamic message propagation (DDMP) network to effectively integrate multi-scale depth information with the image context; (ii) this is achieved by first adaptively sampling context-aware nodes in the image context and then dynamically predicting hybrid depth-dependent filter weights and affinity matrices for propagating information; (iii) by augmenting a center-aware depth encoding (CDE) task, our method successfully alleviates the inaccurate depth prior; (iv) we thoroughly demonstrate the effectiveness of our proposed approach and show state-of-the-art results among monocular-based approaches on the KITTI benchmark. In particular, we ranked $1^{st}$ in the highly competitive KITTI monocular 3D object detection track on the submission day (November 16th, 2020). Code and models are released at \url{https://github.com/fudan-zvg/DDMP}.
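The propagation mechanism in contribution (ii) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `ddmp_step`, the exponential depth affinity, and the pre-supplied sampled indices are illustrative assumptions standing in for the learned hybrid depth-dependent filter weights and adaptive node sampling described above.

```python
import numpy as np

def ddmp_step(feat, depth, samples, temperature=1.0):
    """One depth-conditioned message-propagation step (illustrative sketch).

    feat:    (N, C) node features from the image context
    depth:   (N,)   per-node depth estimates
    samples: (N, K) indices of the K sampled context-aware nodes per node
    """
    out = np.empty_like(feat)
    for i in range(feat.shape[0]):
        nbrs = samples[i]
        # Depth-dependent affinity: nodes at similar depth exchange more
        # information (a hand-crafted stand-in for learned affinity matrices).
        aff = np.exp(-np.abs(depth[nbrs] - depth[i]) / temperature)
        aff = aff / aff.sum()
        # Propagate: affinity-weighted aggregation of sampled neighbour features.
        out[i] = aff @ feat[nbrs]
    return out

# Toy usage: 4 nodes, 3-dim features; each node attends to 2 sampled nodes.
feat = np.arange(12, dtype=float).reshape(4, 3)
depth = np.array([1.0, 1.0, 5.0, 5.0])
samples = np.array([[0, 1], [0, 1], [2, 3], [2, 3]])
refined = ddmp_step(feat, depth, samples)
```

Because the affinity row is normalized, each refined feature is a convex combination of its sampled neighbours; when the sampled depths are equal, the update reduces to a plain mean.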