We propose a general self-supervised learning approach for spatial perception tasks, such as estimating the pose of an object relative to the robot, from onboard sensor readings. The model is learned from training episodes by relying on: a continuous state estimate, possibly inaccurate and affected by odometry drift; and a detector that sporadically provides supervision about the target pose. We demonstrate the general approach in three concrete scenarios: a simulated robot arm that visually estimates the pose of an object of interest; a small differential-drive robot that uses 7 infrared sensors to localize a nearby wall; and an omnidirectional mobile robot that localizes itself in an environment from camera images. Quantitative results show that the approach works well in all three scenarios, and that explicitly accounting for uncertainty yields statistically significant performance improvements.