Single-frame data contains limited information, which bounds the performance of existing vision-based multi-camera 3D object detection paradigms. To fundamentally push the performance boundary in this area, we propose a novel paradigm dubbed BEVDet4D, which lifts the scalable BEVDet paradigm from spatial-only 3D space to spatial-temporal 4D space. We upgrade the naive BEVDet framework with only a few modifications, fusing the feature from the previous frame with the corresponding feature in the current frame. In this way, with a negligible additional computing budget, BEVDet4D can access temporal cues by querying and comparing the two candidate features. Beyond this, we simplify velocity prediction by removing the factors of ego-motion and time from the learning target. As a result, BEVDet4D achieves robust generalization and reduces the velocity error by up to 62.9%. This makes vision-based methods, for the first time, comparable with those relying on LiDAR or radar in this respect. On the challenging nuScenes benchmark, we report a new record of 54.5% NDS with the high-performance configuration dubbed BEVDet4D-Base, surpassing the previous leading method, BEVDet-Base, by +7.3% NDS. The source code is publicly available for further research at https://github.com/HuangJunJie2017/BEVDet.
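As a rough illustration of the temporal fusion described above, the following minimal PyTorch sketch warps the previous frame's BEV feature into the current ego frame using the known ego-motion and concatenates it with the current feature. This is not the official BEVDet4D implementation; the function names (align_prev_bev, fuse_bev_features), the square BEV grid, and the axis conventions are illustrative assumptions.

```python
# Minimal sketch (not the official BEVDet4D code) of temporal BEV fusion:
# align the previous frame's BEV feature to the current ego frame via the
# ego-motion transform, then concatenate channel-wise with the current feature.
import torch
import torch.nn.functional as F


def align_prev_bev(prev_bev, curr_from_prev, bev_range=51.2):
    """Warp the previous BEV feature map into the current ego frame.

    prev_bev:       (B, C, H, W) BEV feature from the previous frame.
    curr_from_prev: (B, 4, 4) homogeneous transform taking previous-frame
                    ego coordinates to current-frame ego coordinates.
    bev_range:      half-extent of the (assumed square) BEV grid in meters,
                    with x along the width axis and y along the height axis.
    """
    B, C, H, W = prev_bev.shape
    # Metric (x, y) coordinates of each current-frame BEV cell center.
    ys, xs = torch.meshgrid(
        torch.linspace(-bev_range, bev_range, H, device=prev_bev.device),
        torch.linspace(-bev_range, bev_range, W, device=prev_bev.device),
        indexing="ij",
    )
    zeros = torch.zeros_like(xs)
    ones = torch.ones_like(xs)
    pts = torch.stack([xs, ys, zeros, ones], dim=-1).view(1, H * W, 4)

    # Map current-frame cell centers back into the previous ego frame.
    prev_from_curr = torch.inverse(curr_from_prev)        # (B, 4, 4)
    pts_prev = pts @ prev_from_curr.transpose(1, 2)       # (B, HW, 4)

    # Normalize to [-1, 1] and resample the previous feature at those points.
    grid = (pts_prev[..., :2] / bev_range).view(B, H, W, 2)
    return F.grid_sample(prev_bev, grid, align_corners=True)


def fuse_bev_features(curr_bev, prev_bev, curr_from_prev):
    """Concatenate the ego-motion-aligned previous feature with the current one."""
    aligned = align_prev_bev(prev_bev, curr_from_prev)
    return torch.cat([curr_bev, aligned], dim=1)          # (B, 2C, H, W)
```

Because the alignment removes ego-motion before fusion, a detection head operating on the fused feature can regress object displacement in a frame-independent coordinate system, which is consistent with the simplified velocity learning target described in the abstract.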