We present a novel Learning from Demonstration (LfD) method, Deformable Manipulation from Demonstrations (DMfD), to solve deformable manipulation tasks using states or images as inputs, given expert demonstrations. Our method uses demonstrations in three different ways, and balances the trade-off between exploring the environment online and using guidance from experts to explore high-dimensional spaces effectively. We test DMfD on a set of representative manipulation tasks for a 1-dimensional rope and a 2-dimensional cloth from the SoftGym suite of tasks, each with state and image observations. Our method exceeds baseline performance by up to 12.9% on state-based tasks and up to 33.44% on image-based tasks, with comparable or better robustness to randomness. Additionally, we create two challenging environments for folding a 2D cloth using image-based observations, and set a performance benchmark for them. We deploy DMfD on a real robot with a minimal loss in normalized performance during real-world execution compared to simulation (~6%). Source code is available at github.com/uscresl/dmfd.