Robotic manipulation of cloth remains challenging due to the complex dynamics of cloth, the lack of a low-dimensional state representation, and self-occlusions. In contrast to previous model-based approaches that learn a pixel-based dynamics model or a compressed latent-vector dynamics, we propose to learn a particle-based dynamics model from a partial point cloud observation. To overcome the challenges of partial observability, we infer which visible points are connected on the underlying cloth mesh. We then learn a dynamics model over this visible connectivity graph. Compared to previous learning-based approaches, our model imposes a strong inductive bias with its particle-based representation for learning the underlying cloth physics; it is invariant to visual features; and its predictions can be more easily visualized. We show that our method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods in simulation. Furthermore, we demonstrate zero-shot sim-to-real transfer: we deploy the model trained in simulation on a Franka arm and show that it can successfully smooth different types of cloth from crumpled configurations. Videos can be found on our project website.
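To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how a visible connectivity graph could be built from a partial point cloud and used for one particle dynamics step. The learned edge classifier and the graph-neural-network dynamics are replaced by simple placeholders; the radius `r`, the helper names, and all shapes are illustrative assumptions.

```python
# Minimal sketch, assuming only numpy/scipy: build a "visible connectivity graph"
# from a partial point cloud and roll one placeholder dynamics step over it.
import numpy as np
from scipy.spatial import cKDTree


def build_visible_connectivity_graph(points: np.ndarray, r: float = 0.02) -> np.ndarray:
    """Connect visible points that are likely neighbors on the underlying mesh.

    points: (N, 3) partial point cloud of the cloth.
    Returns an (E, 2) array of candidate edges. A learned classifier would
    normally prune pairs that are close in space but far apart on the mesh
    (e.g. across a fold); here every radius neighbor is kept (assumption).
    """
    tree = cKDTree(points)
    pairs = np.array(sorted(tree.query_pairs(r)), dtype=np.int64)
    return pairs.reshape(-1, 2)


def dynamics_step(points: np.ndarray, edges: np.ndarray, action: np.ndarray) -> np.ndarray:
    """One placeholder rollout step over the visible connectivity graph.

    A trained graph neural network would aggregate messages along `edges` and
    predict per-particle displacements; here each particle simply moves toward
    the mean of its graph neighbors plus the commanded action (placeholder).
    """
    neighbor_sum = np.zeros_like(points)
    degree = np.zeros((len(points), 1))
    np.add.at(neighbor_sum, edges[:, 0], points[edges[:, 1]])
    np.add.at(neighbor_sum, edges[:, 1], points[edges[:, 0]])
    np.add.at(degree, edges[:, 0], 1.0)
    np.add.at(degree, edges[:, 1], 1.0)
    neighbor_mean = neighbor_sum / np.maximum(degree, 1.0)
    return points + 0.1 * (neighbor_mean - points) + action


if __name__ == "__main__":
    # Fake flat-ish partial cloud standing in for a depth-camera observation.
    cloud = np.random.rand(500, 3) * np.array([0.3, 0.3, 0.01])
    edges = build_visible_connectivity_graph(cloud, r=0.03)
    next_cloud = dynamics_step(cloud, edges, action=np.array([0.0, 0.0, 0.005]))
    print(edges.shape, next_cloud.shape)
```

In the actual method, the neighbor-averaging placeholder would be replaced by a learned message-passing model trained on simulated cloth trajectories; the sketch only illustrates the graph-over-visible-points representation the abstract describes.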