Robots assisting us in factories or homes must learn to make use of objects as tools to perform tasks, e.g., a tray for carrying objects. We consider the problem of learning commonsense knowledge of when a tool may be useful and how its use may be composed with other tools to accomplish a high-level task instructed by a human. We introduce a novel neural model, termed TANGO, for predicting task-specific tool interactions, trained using demonstrations from human teachers instructing a virtual robot. TANGO encodes the world state, comprising objects and symbolic relationships between them, using a graph neural network. The model learns to attend over the scene using knowledge of the goal and the action history, finally decoding the symbolic action to execute. Crucially, we address generalization to unseen environments where some known tools are missing, but alternative unseen tools are present. We show that by augmenting the representation of the environment with pre-trained embeddings derived from a knowledge-base, the model can generalize effectively to novel environments. Experimental results show a 60.5-78.9% absolute improvement over the baseline in predicting successful symbolic plans in unseen settings for a simulated mobile manipulator.
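The pipeline described above — encode the scene graph with message passing, attend over object embeddings conditioned on the goal, then decode a symbolic action — can be illustrated with a minimal NumPy sketch. This is not the TANGO implementation; the dimensions, the single message-passing round, the toy verb set, and all weight matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy scene: 4 objects with 8-dim features (in TANGO these would be fused
# with pre-trained knowledge-base embeddings for generalization), plus a
# symmetric adjacency matrix encoding symbolic relations (e.g. "on", "near").
num_obj, d = 4, 8
node_feats = rng.standard_normal((num_obj, d))
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)  # includes self-loops

# One round of graph message passing: mean-aggregate neighbours, then project.
W_msg = 0.1 * rng.standard_normal((d, d))
deg = adj.sum(axis=1, keepdims=True)
h = relu((adj @ node_feats / deg) @ W_msg)

# Goal-conditioned attention over object embeddings; the goal vector stands
# in for an encoding of the instruction plus the action history.
goal = rng.standard_normal(d)
attn = softmax(h @ goal)
context = attn @ h          # scene summary, weighted by goal relevance

# Decode a symbolic action as a (verb, object) pair — a hypothetical head.
VERBS = ["pick", "place", "push"]
W_verb = 0.1 * rng.standard_normal((d, len(VERBS)))
verb = VERBS[int(np.argmax(context @ W_verb))]
obj = int(np.argmax(attn))  # act on the most-attended object
print(verb, obj)
```

In the actual model, the decoded action would be executed in the simulator, the world state re-encoded, and the loop repeated until the instructed goal is reached.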