This paper proposes a novel active visuo-tactile based methodology for the accurate estimation of the time-invariant SE(3) pose of objects by autonomous robotic manipulators. The robot, equipped with tactile sensors on its gripper, is guided by a vision-based estimate to actively explore and localize objects in an unknown workspace. It reasons over multiple candidate actions and executes the one that maximizes the expected information gain to update the current belief about the object pose. We formulate pose estimation as a linear translation-invariant quaternion filter (TIQF) by decoupling the estimation of translation and rotation and casting the measurement and update models in linear form. Pose estimation is performed sequentially on very sparse point clouds, since acquiring each measurement through tactile sensing is time-consuming. Furthermore, the proposed method is computationally efficient enough to run an exhaustive uncertainty-based active touch selection strategy in real time, without trading off information gain against execution time. We evaluate the performance of our approach extensively in simulation and on a real robotic system.
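To make the linear formulation concrete, the following is a minimal sketch (not the authors' implementation) of how a translation-invariant quaternion filter update can be written: point differences cancel the unknown translation, the rotation constraint becomes a linear pseudo-measurement on the quaternion, and translation is recovered afterwards. Known correspondences, the noise parameters, and all function names are illustrative assumptions.

```python
# Illustrative TIQF-style update (sketch under stated assumptions, not the paper's code).
import numpy as np

def pseudo_measurement_matrix(a_diff, b_diff):
    """Linear constraint H q = 0 encoding b_diff = R(q) a_diff, where
    a_diff / b_diff are translation-invariant differences of corresponding points."""
    s = a_diff + b_diff
    d = a_diff - b_diff
    H = np.zeros((4, 4))
    H[0, 1:] = -d
    H[1:, 0] = d
    H[1:, 1:] = -np.array([[0, -s[2], s[1]],
                           [s[2], 0, -s[0]],
                           [-s[1], s[0], 0]])
    return H

def tiqf_update(q, P, a_diff, b_diff, meas_var=1e-2):
    """One Kalman-style update of the quaternion state with pseudo-measurement 0 = H q."""
    H = pseudo_measurement_matrix(a_diff, b_diff)
    R = meas_var * np.eye(4)                      # measurement noise (illustrative value)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    q = q + K @ (np.zeros(4) - H @ q)             # innovation drives H q toward 0
    q /= np.linalg.norm(q)                        # re-normalise the quaternion state
    P = (np.eye(4) - K @ H) @ P
    return q, P

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# Toy usage: recover a known rotation/translation from sparse corresponding points.
rng = np.random.default_rng(0)
model_pts = rng.normal(size=(6, 3))                          # sparse model points
true_R = quat_to_rot(np.array([0.9239, 0.0, 0.0, 0.3827]))   # 45 deg about z
true_t = np.array([0.1, -0.2, 0.05])
meas_pts = model_pts @ true_R.T + true_t                     # simulated tactile measurements

q, P = np.array([1.0, 0.0, 0.0, 0.0]), np.eye(4)
for i in range(len(model_pts)):
    for j in range(i + 1, len(model_pts)):                   # translation cancels in differences
        q, P = tiqf_update(q, P, model_pts[i] - model_pts[j],
                           meas_pts[i] - meas_pts[j])
R_est = quat_to_rot(q)
t_est = meas_pts.mean(axis=0) - R_est @ model_pts.mean(axis=0)  # translation after rotation
```

Because the state and measurement models are both linear in the quaternion, each update is a constant-cost matrix operation, which is what makes evaluating many candidate touches in real time plausible in this sketch.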