Safe and reliable autonomous driving depends heavily on an accurate and robust perception system, which in turn cannot be fully realized without precisely calibrated sensors. Environmental and operational conditions, as well as improper maintenance, can produce calibration errors that inhibit sensor fusion and, consequently, degrade perception performance. Traditionally, sensor calibration is performed in a controlled environment with one or more known targets. Such a procedure can only be carried out between drives and requires manual operation, a tedious task if it must be conducted regularly. This has sparked recent interest in online targetless methods, which derive a set of geometric transformations from perceived environmental features. However, the required redundancy in sensing modalities makes this task even more challenging, since the features captured by each modality, and their distinctiveness, may vary. We present a holistic approach to the joint calibration of a camera-lidar-radar trio. Leveraging prior knowledge and the physical properties of these sensing modalities, together with semantic information, we propose two targetless calibration methods within a cost-minimization framework: one via direct online optimization, and one via self-supervised learning (SSL).
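To make the cost-minimization idea concrete, the sketch below recovers a rigid extrinsic transform between two sensor frames from matched scene features by minimizing a least-squares alignment cost in closed form (Kabsch/Procrustes). This is a deliberately simplified illustration, not the paper's method: correspondences between the two point sets are assumed known, the data are synthetic and noiseless, and the sensor names are placeholders.

```python
import numpy as np

def calibrate_rigid(src, dst):
    """Closed-form minimizer of the cost sum_i ||R @ src_i + t - dst_i||^2
    (Kabsch algorithm). src, dst: (N, 3) arrays of matched features."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic "environmental features" observed in a shared scene by two
# sensors whose relative pose (the extrinsic calibration) is unknown.
rng = np.random.default_rng(0)
pts_lidar = rng.uniform(-10, 10, size=(50, 3))
theta = np.deg2rad(5.0)  # small simulated miscalibration about the z-axis
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, -0.1, 0.05])
pts_radar = pts_lidar @ R_true.T + t_true

R_est, t_est = calibrate_rigid(pts_lidar, pts_radar)
```

In a genuine targetless setting, the hard part is exactly what this toy assumes away: establishing reliable cross-modal correspondences, which is where the semantic information and sensor-specific physical priors mentioned above come into play.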