Collaborative 3D object detection exploits information exchange among multiple agents to enhance the accuracy of object detection in the presence of sensor impairments such as occlusion. In practice, however, pose estimation errors caused by imperfect localization lead to spatial message misalignment and significantly degrade the performance of collaboration. To alleviate the adverse impact of pose errors, we propose CoAlign, a novel hybrid collaboration framework that is robust to unknown pose errors. The proposed solution relies on novel agent-object pose graph modeling to enhance pose consistency among collaborating agents. Furthermore, we adopt a multi-scale data fusion strategy to aggregate intermediate features at multiple spatial resolutions. Compared with previous works, which require ground-truth poses for training supervision, CoAlign is more practical: it requires no ground-truth pose supervision during training and makes no specific assumptions about pose errors. Extensive evaluation on multiple datasets certifies that CoAlign significantly reduces relative localization error and achieves state-of-the-art detection performance when pose errors exist. Code is available to the research community at https://github.com/yifanlu0227/CoAlign.
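The pose-consistency idea behind an agent-object pose graph can be illustrated with a toy example. The sketch below is not the paper's implementation: it uses a hypothetical 2D, translation-only setting with made-up observations, where each agent reports object positions in its own frame and a least-squares solve recovers poses that make all agents' observations of the same objects agree.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy data: 2 agents, 2 shared objects, translation-only poses.
# obs[agent][object] = object position measured in that agent's frame.
obs = {
    0: {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])},
    1: {0: np.array([0.0, 0.0]), 1: np.array([-1.0, 1.0])},
}

def residuals(x):
    # x packs agent 1's translation and the two object positions;
    # agent 0 is fixed at the origin to anchor the gauge freedom.
    t1, p0, p1 = x[0:2], x[2:4], x[4:6]
    poses = {0: np.zeros(2), 1: t1}
    objs = {0: p0, 1: p1}
    r = []
    for a, seen in obs.items():
        for o, z in seen.items():
            # Consistency residual: pose + local observation should
            # land on the shared world-frame object position.
            r.append(poses[a] + z - objs[o])
    return np.concatenate(r)

sol = least_squares(residuals, np.zeros(6))
```

With these observations the graph is exactly consistent, so the solve drives all residuals to zero and recovers agent 1 at (1, 0). In the noisy case, the same residual structure distributes pose error across agents instead of letting any single mislocalized agent corrupt the fused message.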
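The multi-scale fusion strategy can be sketched minimally as well. This is a hand-written illustration, not the released code: it assumes each agent contributes already-aligned feature maps at a few spatial resolutions, averages across agents per scale, and merges coarse scales into the finest one by nearest-neighbour upsampling.

```python
import numpy as np

def fuse_multiscale(agent_feats):
    """Hypothetical sketch of multi-scale intermediate-feature fusion.

    agent_feats: list (one entry per agent) of dicts mapping a
    downsampling factor s -> feature map of shape (C, H // s, W // s),
    assumed already warped into a common coordinate frame.
    """
    scales = sorted(agent_feats[0].keys())  # e.g. [1, 2, 4]
    # Average the aligned agent features independently at each scale.
    fused = {s: np.mean([f[s] for f in agent_feats], axis=0) for s in scales}
    c, h, w = fused[scales[0]].shape        # finest resolution
    out = fused[scales[0]].copy()
    for s in scales[1:]:
        # Nearest-neighbour upsample the coarser map and sum it in.
        up = fused[s].repeat(s, axis=1).repeat(s, axis=2)
        out += up[:, :h, :w]
    return out
```

The design point is that coarse scales carry context that survives small residual misalignment, while the finest scale preserves localization detail; summing them lets the detector benefit from both.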