Tendon-driven robots, a type of continuum robot, have the potential to reduce the invasiveness of surgery by enabling access to difficult-to-reach anatomical targets. In the future, automating surgical tasks for these robots may help reduce surgeon strain in the face of a rapidly growing patient population. However, directly encoding surgical tasks and their associated context for these robots is infeasible. In this work, we take steps toward a system that learns to successfully perform context-dependent surgical tasks directly from a set of expert demonstrations. We present three models trained on the demonstrations, each conditioned on a vector encoding the context of the demonstration. We then use these models to plan and execute motions for the tendon-driven robot, similar to the demonstrations, for novel contexts not seen in the training set. We demonstrate the efficacy of our method on three surgery-inspired tasks.
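The abstract does not specify the model architectures, but the core idea of conditioning a learned policy on a context vector can be sketched minimally. The following is an illustrative assumption, not the paper's method: state, context, and action dimensions, the concatenation-based conditioning, and the two-layer network are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical context-conditioned policy sketch; all names and
# dimensions are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

STATE_DIM, CONTEXT_DIM, HIDDEN, ACTION_DIM = 6, 3, 32, 4

# Randomly initialized two-layer MLP weights, standing in for
# parameters that would be learned from expert demonstrations.
W1 = rng.normal(0.0, 0.1, (STATE_DIM + CONTEXT_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, ACTION_DIM))
b2 = np.zeros(ACTION_DIM)

def policy(state, context):
    """Map a robot state plus a context vector to a tendon command.

    Conditioning here is simple concatenation of state and context
    before the network input; other schemes are possible.
    """
    x = np.concatenate([state, context])
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

state = rng.normal(size=STATE_DIM)      # e.g. tendon displacements
context = rng.normal(size=CONTEXT_DIM)  # e.g. encoded anatomical target
action = policy(state, context)
print(action.shape)  # (4,)
```

At execution time, a novel context vector (one not seen during training) would be fed to the same policy to produce motions generalizing beyond the demonstration set.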