Sharing information between connected and autonomous vehicles (CAVs) fundamentally improves the performance of collaborative object detection for self-driving. However, CAVs still face uncertainty in object detection due to practical challenges, which affects downstream modules in self-driving such as planning and control. Hence, uncertainty quantification is crucial for safety-critical systems such as CAVs. Our work is the first to estimate the uncertainty of collaborative object detection. We propose a novel uncertainty quantification method, called Double-M Quantification, which tailors a moving block bootstrap (MBB) algorithm with direct modeling of the multivariate Gaussian distribution of each corner of the bounding box. Our method captures both epistemic and aleatoric uncertainty in a single inference pass based on the offline Double-M training process, and it can be used with different collaborative object detectors. Through experiments on a comprehensive collaborative perception dataset, we show that our Double-M method achieves more than a 4x improvement in uncertainty score and more than a 3% accuracy improvement compared with state-of-the-art uncertainty quantification methods. Our code is publicly available at https://coperception.github.io/double-m-quantification.
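The two ingredients named above can be illustrated with a minimal NumPy sketch: direct modeling fits a multivariate Gaussian to each bounding-box corner via a negative log-likelihood loss (aleatoric uncertainty), while a moving block bootstrap resamples residuals in contiguous blocks to estimate epistemic uncertainty. This is an illustrative sketch under assumed 2-D corners, not the paper's implementation; function names and block length are hypothetical.

```python
import numpy as np

def gaussian_nll(y, mu, cov):
    """Negative log-likelihood of an observed corner y under a predicted
    multivariate Gaussian (mu, cov) -- the direct-modeling loss that lets
    the detector regress per-corner aleatoric uncertainty."""
    d = y - mu
    k = len(y)
    return 0.5 * (d @ np.linalg.solve(cov, d)
                  + np.log(np.linalg.det(cov))
                  + k * np.log(2 * np.pi))

def moving_block_bootstrap(residuals, block_len, rng):
    """Resample a residual sequence in contiguous blocks of length
    block_len, preserving short-range correlation; repeated resamples
    give a bootstrap estimate of epistemic uncertainty."""
    n = len(residuals)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    sample = np.concatenate([residuals[s:s + block_len] for s in starts])
    return sample[:n]

# Example: NLL of a perfectly predicted 2-D corner with identity covariance,
# and one MBB resample of a 10-step residual sequence (block length 3).
y = np.array([1.0, 2.0])
nll = gaussian_nll(y, y, np.eye(2))
resampled = moving_block_bootstrap(np.arange(10.0), 3,
                                   np.random.default_rng(0))
```

In this sketch, the variance of box predictions across MBB-resampled models would supply the epistemic term, while the learned covariance supplies the aleatoric term, matching the one-pass inference described in the abstract.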