Recent Semantic SLAM methods combine classical geometry-based estimation with deep learning-based object detection or semantic segmentation. In this paper, we evaluate the quality of semantic maps generated by state-of-the-art class- and instance-aware dense semantic SLAM algorithms whose code is publicly available, and we explore the impact that both semantic segmentation and pose estimation have on the quality of the resulting maps. We obtain these results by providing the algorithms with ground-truth pose and/or semantic segmentation data from simulated environments. Through these experiments, we establish that semantic segmentation is the largest source of error, reducing mAP and OMQ performance by up to 74.3% and 71.3%, respectively.
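Below is a minimal sketch (not from the paper) of the ablation protocol described above: each SLAM system is run with predicted versus ground-truth pose and segmentation, and the relative drop in a map-quality metric (e.g. mAP or OMQ) is reported. The helpers `run_slam` and `evaluate_map` are hypothetical placeholders standing in for the evaluated systems and metrics.

```python
from itertools import product


def relative_drop(baseline: float, degraded: float) -> float:
    """Percentage drop of `degraded` relative to `baseline`."""
    return 100.0 * (baseline - degraded) / baseline


def ablation_study(run_slam, evaluate_map, sequences):
    """Run every combination of {predicted, ground-truth} pose and segmentation.

    `run_slam(seq, use_gt_pose, use_gt_segmentation)` and `evaluate_map(map)`
    are assumed interfaces; `evaluate_map` returns a dict such as
    {"mAP": ..., "OMQ": ...}.
    """
    results = {}
    for gt_pose, gt_seg in product([False, True], repeat=2):
        maps = [
            run_slam(seq, use_gt_pose=gt_pose, use_gt_segmentation=gt_seg)
            for seq in sequences
        ]
        scores = [evaluate_map(m) for m in maps]
        # Average each metric over the evaluated sequences.
        results[(gt_pose, gt_seg)] = {
            k: sum(s[k] for s in scores) / len(scores) for k in scores[0]
        }
    return results


# Example: isolate the segmentation error by holding ground-truth pose fixed
# and comparing predicted vs. ground-truth segmentation (the style of
# comparison behind the reported 74.3% / 71.3% drops).
#   results = ablation_study(run_slam, evaluate_map, sequences)
#   best = results[(True, True)]
#   pred_seg = results[(True, False)]
#   print(relative_drop(best["mAP"], pred_seg["mAP"]))
```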