A robust grip is key to the successful manipulation and joining of workpieces in any industrial assembly process. The stability of a grip depends on the geometric and physical properties of the object as well as on the gripper itself. Current state-of-the-art algorithms can usually predict whether a grip will fail. However, they cannot predict the force at which the gripped object starts to slip, which is critical because the object may be subjected to external forces, e.g. when it is joined with another object. This research project aims to develop an AI-based grip metric from tactile sensor data capturing the physical interaction between gripper and object: the maximum force that can be applied to the object before it begins to slip is to be predicted before the object is manipulated. The RGB images of the contact surface between the object and the gripper jaws, acquired by GelSight tactile sensors during the initial phase of the grip, serve as the training input for the grip metric. To generate such a data set, a pull experiment was designed using a UR5 robot. Performing these experiments physically to populate the data set is time-consuming, since different object classes, geometries, material properties, and surface textures need to be covered to make the prediction robust. Hence, a simulation model of the experimental setup was developed to both speed up and automate the data generation process. In this paper, the design of this digital twin and the accuracy of the synthetic data are presented. State-of-the-art image comparison algorithms show that the simulated RGB images of the contact surface match the experimental data. In addition, the maximum pull forces can be reproduced for different object classes and grip scenarios. As a result, the synthetically generated data can be used to train the neural grip metric network.