Recent LSS-based multi-view 3D object detection has made tremendous progress by processing features in Bird's-Eye-View (BEV) with a convolutional detector. However, a typical convolution ignores the radial symmetry of BEV features and increases the difficulty of optimizing the detector. To preserve this inherent property of BEV features and ease the optimization, we propose an azimuth-equivariant convolution (AeConv) and an azimuth-equivariant anchor. The sampling grid of AeConv is always aligned with the radial direction, so it can learn azimuth-invariant BEV features. The proposed anchor enables the detection head to learn to predict azimuth-irrelevant targets. In addition, we introduce a camera-decoupled virtual depth to unify the depth prediction across images with different camera intrinsic parameters. The resulting detector is dubbed Azimuth-equivariant Detector (AeDet). Extensive experiments are conducted on nuScenes, where AeDet achieves 62.0% NDS, surpassing recent multi-view 3D object detectors such as PETRv2 (58.2% NDS) and BEVDepth (60.0% NDS) by a large margin. Project page: https://fcjian.github.io/aedet.
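The core geometric idea behind AeConv, as the abstract describes it, is to rotate the convolution's sampling grid at each BEV cell so that one grid axis points along the radial direction from the ego vehicle. Below is a minimal numpy sketch of that rotated sampling grid; the function name, the `k x k` grid layout, and the `dilation` parameter are illustrative assumptions, not the paper's implementation (which realizes this inside a CUDA/deformable-convolution-style operator).

```python
import numpy as np

def rotated_sampling_grid(x, y, k=3, dilation=1.0):
    """Hypothetical sketch: rotate a regular k x k conv sampling grid so one
    axis aligns with the radial direction at BEV location (x, y)."""
    az = np.arctan2(y, x)                  # azimuth of the cell w.r.t. the ego origin
    c, s = np.cos(az), np.sin(az)
    R = np.array([[c, -s], [s, c]])        # 2-D rotation by the azimuth angle
    half = (k - 1) / 2
    # regular grid offsets around the cell, shape (k*k, 2), in (x, y) order
    offs = np.stack(np.meshgrid(np.arange(k) - half,
                                np.arange(k) - half), -1).reshape(-1, 2) * dilation
    # rotate the offsets and translate them to the cell's absolute position
    return offs @ R.T + np.array([x, y])
```

Because the grid co-rotates with the azimuth, a feature pattern seen at one bearing is sampled identically at any other bearing, which is what makes the learned BEV features azimuth-invariant.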