Accurate extrinsic sensor calibration is essential for both autonomous vehicles and robots. Traditionally, this is an involved process requiring calibration targets and known fiducial markers, and it is generally performed in a lab. Moreover, even a small change in the sensor layout requires recalibration. With the anticipated arrival of consumer autonomous vehicles, there is demand for a system which can perform this calibration automatically, after deployment and without specialist human expertise. To address these limitations, we propose a flexible framework which can estimate extrinsic parameters without an explicit calibration stage, even for sensors with unknown scale. Our first contribution builds upon standard hand-eye calibration by jointly recovering scale. Our second contribution makes the system robust to imperfect and degenerate sensor data by collecting independent sets of poses and automatically selecting those which are most suitable. We show that this robustness is essential for the target scenario. Unlike previous approaches, ours runs in real time and continuously estimates the extrinsic transform. For both an ideal experimental setup and a real use case, comparison against these approaches shows that we outperform the state of the art. Furthermore, we demonstrate that the recovered scale may be applied to the full trajectory, circumventing the need for scale estimation via sensor fusion.
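The idea of recovering scale jointly with the extrinsic transform can be sketched with the classical hand-eye relation AX = XB. The following is an illustrative reconstruction on synthetic, noiseless data, not the paper's actual implementation: when one sensor (e.g. monocular SLAM) reports translations only up to an unknown scale s, the translation equation (R_A − I) t_X + t_A = s · R_X t_B becomes linear in (t_X, s), so both can be solved jointly by least squares once the rotation is recovered.

```python
import numpy as np

rng = np.random.default_rng(0)

def rodrigues(axis, angle):
    """Rotation matrix from an axis and angle (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def rotation_axis(R):
    """Unit rotation axis of R (valid when the angle is not 0 or pi)."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

# Ground-truth extrinsic X = (R_X, t_X) and unknown metric scale s (chosen
# arbitrarily here for the synthetic example).
R_X = rodrigues(np.array([1.0, 2.0, 3.0]), 0.7)
t_X = np.array([0.3, -0.1, 0.5])
s_true = 2.5

# Synthesize relative-motion pairs satisfying A_i = X B_i X^{-1}; sensor B
# reports its translations divided by the unknown scale.
As, Bs = [], []
for _ in range(20):
    R_B = rodrigues(rng.normal(size=3), rng.uniform(0.2, 1.0))
    t_B_true = rng.normal(size=3)
    R_A = R_X @ R_B @ R_X.T
    t_A = R_X @ t_B_true + t_X - R_A @ t_X
    As.append((R_A, t_A))
    Bs.append((R_B, t_B_true / s_true))  # scale-ambiguous measurement

# Step 1: rotation. A_i = X B_i X^{-1} implies axis(A_i) = R_X axis(B_i),
# so align the rotation axes with a Kabsch/SVD fit.
M = sum(np.outer(rotation_axis(RA), rotation_axis(RB))
        for (RA, _), (RB, _) in zip(As, Bs))
U, _, Vt = np.linalg.svd(M)
R_hat = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

# Step 2: translation and scale jointly. From the translation part of
# A X = X B:  (R_A - I) t_X - s * R_X t_B = -t_A, linear in (t_X, s).
rows, rhs = [], []
for (R_A, t_A), (_, t_B) in zip(As, Bs):
    rows.append(np.hstack([R_A - np.eye(3), (-R_hat @ t_B)[:, None]]))
    rhs.append(-t_A)
sol, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
t_hat, s_hat = sol[:3], sol[3]
```

On noiseless data this recovers R_X, t_X, and the scale exactly; in practice, the robustness the abstract describes comes from selecting well-conditioned pose sets, since motions with parallel rotation axes leave this system degenerate.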