The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that have been meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models at the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from adverse weather conditions, external disturbances, and internal sensor failures. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models remain vulnerable to such corruptions. We draw key observations on the choices of data representation, augmentation scheme, and training strategy that can severely affect model performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple yet flexible voxelization strategy to enhance model resiliency. We hope our benchmark and approach can inspire future research on designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available.