Human-robot co-manipulation of soft materials, such as fabrics, composites, and sheets of paper or cardboard, is a challenging operation with several relevant industrial applications. Estimating the deformation state of the co-manipulated material is one of the main challenges. Existing methods provide an indirect measure by computing the human-robot relative distance. In this paper, we develop a data-driven model that estimates the deformation state of the material from a depth image through a Convolutional Neural Network (CNN). First, we define the deformation state of the material as the relative roto-translation between the current robot pose and the human grasping position. The model estimates the current deformation state with a CNN, specifically a DenseNet-121 pretrained on ImageNet. The delta between the current and the desired deformation state is fed to the robot controller, which outputs twist commands. The paper describes the approach developed to acquire and preprocess the dataset and to train the model. The model is compared with the current state-of-the-art method based on a camera-based skeletal tracker. Results show that our approach achieves better performance and avoids the drawbacks of using a skeletal tracker. Finally, we also study the model performance across different architectures and dataset sizes to minimize the time required for dataset acquisition.
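The abstract's control step, feeding the delta between the current and desired deformation state (a relative roto-translation) to a controller that outputs twist commands, can be illustrated with a minimal sketch. This is not the paper's implementation: the proportional-gain structure, the gain values, and the function names are assumptions; the paper only states that the pose delta is mapped to a twist.

```python
import numpy as np

def so3_log(R):
    """Axis-angle vector from a rotation matrix (log map of SO(3))."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:  # near-identity rotation: zero angular error
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def twist_command(T_cur, T_des, k_v=1.0, k_w=1.0):
    """Hypothetical proportional controller: map the delta between the
    current and desired 4x4 homogeneous poses to a 6D twist (v, omega).
    Gains k_v, k_w are illustrative assumptions, not from the paper."""
    v = k_v * (T_des[:3, 3] - T_cur[:3, 3])            # translational error
    w = k_w * so3_log(T_des[:3, :3] @ T_cur[:3, :3].T)  # rotational error
    return np.concatenate([v, w])

# Usage: desired pose offset 0.1 m along x from the current pose
T_cur = np.eye(4)
T_des = np.eye(4)
T_des[0, 3] = 0.1
twist = twist_command(T_cur, T_des)  # -> [0.1, 0, 0, 0, 0, 0]
```

In this sketch the CNN's output would supply `T_cur` (the estimated deformation state) at each control cycle, while `T_des` encodes the desired deformation state.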