We address the problem of goal-directed cloth manipulation, a challenging task due to the deformability of cloth. Our insight is that optical flow, a technique normally used for motion estimation in video, can also provide an effective representation for corresponding cloth poses across observation and goal images. We introduce FabricFlowNet (FFN), a cloth manipulation policy that leverages flow as both an input and as an action representation to improve performance. FabricFlowNet also elegantly switches between bimanual and single-arm actions based on the desired goal. We show that FabricFlowNet significantly outperforms state-of-the-art model-free and model-based cloth manipulation policies that take image input. We also present real-world experiments on a bimanual system, demonstrating effective sim-to-real transfer. Finally, we show that our method generalizes when trained on a single square cloth to other cloth shapes, such as T-shirts and rectangular cloths. Video and other supplementary materials are available at: https://sites.google.com/view/fabricflownet.