Both robot and hand-eye calibration have been objects of research for decades. While current approaches can precisely and robustly identify the parameters of a robot's kinematic model, they still rely on external devices such as calibration objects, markers, and/or external sensors. Instead of trying to fit the recorded measurements to a model of a known object, this paper treats robot calibration as an offline SLAM problem, in which scanning poses are linked to a fixed point in space by a moving kinematic chain. As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor, thus enabling fully autonomous self-calibration without any external tools. My new approach utilizes a modified version of the Iterative Closest Point algorithm to perform bundle adjustment on multiple 3D recordings, estimating the optimal parameters of the kinematic model. A detailed evaluation of the system is presented on a real robot with various attached 3D sensors. The results show that the system reaches a precision comparable to a dedicated external tracking system at a fraction of its cost.
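To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of how ICP-style registration of several eye-in-hand depth scans can double as kinematic calibration: each scan is mapped into the base frame through the forward kinematics, and the kinematic parameters are adjusted so that all scans agree with a common reference cloud. All names (`dh_nominal`, `scans`, `joint_angles`, `reference_cloud`) are hypothetical placeholders, the hand-eye transform is omitted, and correspondences are found by a simple nearest-neighbour query rather than the modified ICP described above.

```python
# Minimal sketch: kinematic calibration as joint registration of depth scans.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one Denavit-Hartenberg link."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def forward_kinematics(dh_flat, joints):
    """Base-to-flange transform for one joint configuration."""
    T = np.eye(4)
    for (d, a, alpha), q in zip(dh_flat.reshape(-1, 3), joints):
        T = T @ dh_transform(q, d, a, alpha)
    return T

def residuals(dh_flat, scans, joint_angles, reference_cloud):
    """ICP-style data term: distances of every scan point, expressed in the
    base frame via the current kinematic parameters, to the reference cloud."""
    tree = cKDTree(reference_cloud)
    errs = []
    for cloud, q in zip(scans, joint_angles):
        T = forward_kinematics(dh_flat, q)
        pts = (T[:3, :3] @ cloud.T).T + T[:3, 3]   # sensor/flange -> base frame
        dists, _ = tree.query(pts)
        errs.append(dists)
    return np.concatenate(errs)

# Hypothetical usage: refine nominal DH parameters from recorded scans.
# result = least_squares(residuals, dh_nominal.ravel(),
#                        args=(scans, joint_angles, reference_cloud))
```

A full bundle adjustment would additionally re-estimate point correspondences in each iteration and include the sensor-to-flange transform among the optimized parameters; the sketch only illustrates how the kinematic model enters the registration residual.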