The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that have been meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models during the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from adverse weather conditions, external disturbances, and internal sensor failures. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models remain vulnerable to such corruptions. We draw key observations on how the choice of data representation, augmentation scheme, and training strategy can severely affect a model's performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple yet flexible voxelization strategy to enhance model resiliency. We hope our benchmark and approach will inspire future research on designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available.