Sharing autonomy between robots and human operators could facilitate data collection of robotic task demonstrations to continuously improve learned models. Yet, the means to communicate intent and reason about the future are disparate between humans and robots. We present Assistive Tele-op, a virtual reality (VR) system for collecting robot task demonstrations that displays an autonomous trajectory forecast to communicate the robot's intent. As the robot moves, the user can switch between autonomous and manual control when desired. This allows users to collect task demonstrations with both a high success rate and with greater ease than manual teleoperation systems. Our system is powered by transformers, which can provide a window of potential states and actions far into the future -- with almost no added computation time. A key insight is that human intent can be injected at any location within the transformer sequence if the user decides that the model-predicted actions are inappropriate. At every time step, the user can (1) do nothing and allow autonomous operation to continue while observing the robot's future plan sequence, or (2) take over and momentarily prescribe a different set of actions to nudge the model back on track. We host the videos and other supplementary material at https://sites.google.com/view/assistive-teleop.
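The take-over mechanism described above can be sketched as a simple control loop: at each step the model forecasts future actions, and any human override is appended to the same sequence so later forecasts condition on it. This is a minimal illustration only; `forecast` is a hypothetical stand-in for the transformer's autoregressive rollout, not the paper's actual model.

```python
def forecast(history, horizon=5):
    # Hypothetical stand-in for the transformer's rollout:
    # extrapolate the last action over the prediction horizon.
    last = history[-1] if history else 0.0
    return [last] * horizon

def teleop_step(history, user_action=None):
    """One control step: execute the model's forecast unless the user
    overrides, in which case the user's action is appended to the
    sequence so subsequent forecasts condition on it."""
    if user_action is not None:
        action = user_action           # human takes over
    else:
        action = forecast(history)[0]  # autonomous operation
    history.append(action)
    return action

# Simulated session: autonomy, a human nudge, then autonomy again.
history = [1.0]
teleop_step(history)                   # model repeats 1.0
teleop_step(history, user_action=2.0)  # user injects intent
teleop_step(history)                   # model now conditions on 2.0
print(history)  # [1.0, 1.0, 2.0, 2.0]
```

Because the override lands inside the shared action sequence rather than in a separate channel, the model's later predictions are "nudged back on track" without retraining or extra computation.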