Robotic manipulation of slender objects is challenging, especially when the induced deformations are large and nonlinear. Learning-based control approaches, e.g., imitation learning, have traditionally been used to tackle deformable-material manipulation, but they lack generality and often fail critically after a simple change in material, geometric, and/or environmental (e.g., friction) properties. In this article, we address a fundamental but difficult step of robotic origami: forming a predefined fold in paper with only a single manipulator. A data-driven framework combining physically accurate simulation and machine learning is used to train deep neural network models that predict the external forces induced on the paper for a given grasp position. We frame the problem using scaling analysis, which yields a control framework robust to changes in material and geometry. Path planning is carried out over the generated manifold to produce robot manipulation trajectories optimized to prevent sliding. Furthermore, the inference speed of the trained model enables the incorporation of real-time visual feedback to achieve closed-loop sensorimotor control. Real-world experiments demonstrate that our framework greatly improves robotic manipulation performance compared with natural paper-folding strategies, even when manipulating paper objects of various materials and shapes.
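To make the learning step of the pipeline concrete, the sketch below shows one plausible realization (not the authors' implementation) of the force-regression component: a small PyTorch multilayer perceptron trained on simulator-generated pairs of grasp position and induced external force. The class name ForcePredictor, the layer sizes, and the synthetic placeholder data are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    """Hypothetical MLP mapping a normalized grasp position to the predicted
    external force on the sheet. Architecture and dimensions are illustrative
    assumptions, not the authors' model."""
    def __init__(self, in_dim: int = 2, hidden: int = 128, out_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, grasp_pos: torch.Tensor) -> torch.Tensor:
        return self.net(grasp_pos)

# Illustrative supervised training on simulation data (x: grasp positions,
# y: external forces from a physically accurate simulator), using plain MSE
# regression; the random tensors stand in for real simulator output.
model = ForcePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(1024, 2)   # placeholder grasp positions
y = torch.rand(1024, 3)   # placeholder simulated force labels
for _ in range(10):
    pred = model(x)
    loss = nn.functional.mse_loss(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Consistent with the abstract's scaling-analysis claim, the inputs and force labels in such a sketch would be non-dimensionalized before training so that a single trained model can transfer across sheets of different sizes and stiffnesses; the precise normalization used in the article is not reproduced here.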