Collaborative robots offer increased interaction capabilities at relatively low cost, but, in contrast to their industrial counterparts, they inevitably lack precision. Moreover, in addition to the robots' own imperfect models, day-to-day operations entail various sources of error that, despite being small, accumulate rapidly. This happens as tasks change and robots are re-programmed, often requiring time-consuming calibrations. These aspects strongly limit the application of collaborative robots to tasks demanding high precision (e.g., watch-making). We address this problem with a dual-arm system equipped with laser-based sensing that measures relative poses between objects of interest and compensates for pose errors arising from robot proprioception. Our approach leverages prior knowledge of object 3D models in combination with point cloud registration to efficiently extract the relevant poses and compute corrective trajectories, resulting in high-precision assembly behaviors. The approach is validated in a needle threading experiment, with a 150 µm thread and a 300 µm hole, and in a USB insertion task, using two 7-axis Panda robots.
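To make the registration-based correction idea concrete, the following is a minimal sketch, not the authors' implementation: it aligns a point cloud sampled from a known object model against a laser scan with ICP, then compares the measured object pose to the pose predicted from robot proprioception to obtain a corrective transform. Open3D is assumed for registration; the file names, frames, and the proprioceptive prior are hypothetical placeholders.

```python
import numpy as np
import open3d as o3d

# Point cloud sampled from the known 3D (CAD) model of the object (model frame).
model = o3d.io.read_point_cloud("object_model.ply")   # assumed file
# Point cloud acquired by the laser-based sensor (sensor frame).
scan = o3d.io.read_point_cloud("laser_scan.ply")      # assumed file

# Normals are needed for point-to-plane ICP.
for pcd in (model, scan):
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.002, max_nn=30))

# Initial guess of the object pose in the sensor frame, e.g. from the
# robot's forward kinematics (proprioception). Placeholder value here.
T_sensor_object_prior = np.eye(4)

# Refine the object pose by registering the model onto the laser scan.
result = o3d.pipelines.registration.registration_icp(
    model, scan,
    max_correspondence_distance=0.005,   # 5 mm correspondence search radius
    init=T_sensor_object_prior,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

T_sensor_object_measured = np.asarray(result.transformation)

# The discrepancy between the proprioceptive estimate and the measured pose
# is the correction to compose with the commanded end-effector target.
T_correction = T_sensor_object_measured @ np.linalg.inv(T_sensor_object_prior)
print("corrective transform:\n", T_correction)
```

In practice such a correction would be turned into a short corrective trajectory for the arm holding the part; the details of that step depend on the controller and are not shown here.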