The robustness of 3D perception systems to natural corruptions from the environment and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that have been meticulously cleaned; such configurations cannot reflect the reliability of perception models at the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios caused by natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from adverse weather conditions, external disturbances, and internal sensor failures. We find that, although promising results have been steadily achieved on standard benchmarks, state-of-the-art 3D perception models remain vulnerable to these corruptions. We draw key observations on how choices of data representation, augmentation scheme, and training strategy can severely affect model performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple yet flexible voxelization strategy to enhance model resiliency. We hope our benchmark and approach inspire future research on designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available.
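To make the notion of "natural corruptions" concrete, the sketch below simulates two illustrative perturbations on a LiDAR-style point cloud: Gaussian jitter of point coordinates (mimicking sensor measurement noise) and random point dropout (mimicking beam or echo loss). These are hypothetical toy implementations for intuition only, not the corruption-generation protocol used by the Robo3D benchmark itself.

```python
import random

def jitter_points(points, sigma=0.02, seed=0):
    """Perturb XYZ coordinates with Gaussian noise to mimic sensor noise.

    `points` is a list of (x, y, z, intensity) tuples; `sigma` is the
    noise standard deviation in the same units as the coordinates.
    """
    rng = random.Random(seed)
    return [
        (x + rng.gauss(0.0, sigma),
         y + rng.gauss(0.0, sigma),
         z + rng.gauss(0.0, sigma),
         intensity)
        for (x, y, z, intensity) in points
    ]

def drop_points(points, drop_ratio=0.5, seed=0):
    """Randomly discard a fraction of points to mimic beam/echo dropout."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() >= drop_ratio]

# Toy cloud: 1000 random points with an intensity channel.
rng = random.Random(42)
cloud = [(rng.uniform(-50, 50), rng.uniform(-50, 50),
          rng.uniform(-2, 4), rng.random()) for _ in range(1000)]

noisy = jitter_points(cloud, sigma=0.05)   # same size, perturbed coordinates
sparse = drop_points(cloud, drop_ratio=0.5)  # roughly half the points remain
```

A density-insensitive model, in this spirit, should produce stable predictions whether it receives `cloud` or the corrupted variants `noisy` and `sparse`.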