Visual-inertial sensors have a wide range of applications in robotics. However, good performance often requires sophisticated motion routines to accurately calibrate camera intrinsics and inter-sensor extrinsics. This work presents a novel formulation for learning a motion policy, executed on a robot arm, that automates data collection for jointly calibrating intrinsics and extrinsics. Our approach models the calibration process compactly using model-free deep reinforcement learning to derive a policy that guides the motions of a robotic arm holding the sensor so that it efficiently collects measurements usable for both camera intrinsic calibration and camera-IMU extrinsic calibration. Given the current pose and the measurements collected so far, the learned policy generates the subsequent transformation that optimizes sensor calibration accuracy. Evaluations in simulation and on a real robotic system show that our learned policy generates favorable motion trajectories and efficiently collects enough measurements to yield the desired intrinsics and extrinsics with short path lengths. In simulation we perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed-up of 3 times over a human expert.