Accurate detection and grasping of transparent objects is challenging but important for robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and varying lighting conditions is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian distribution based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, achieving good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping behavior, a fully convolutional network based tactile feature extraction method and a central-location-based adaptive grasping strategy are designed, improving the grasping success rate by 36.7% compared with direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, improving classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the efficiency of transparent object grasping.
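To illustrate the Gaussian distribution based annotation mentioned above, the sketch below renders labeled grasp centres as a soft 2-D Gaussian quality map rather than hard single-pixel labels. This is a minimal sketch of the general technique, not the authors' implementation; the image size, standard deviation, and helper name are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): encode each annotated grasp
# centre as a 2-D Gaussian on a per-pixel grasp-quality heatmap.
import numpy as np

def gaussian_grasp_heatmap(grasp_centers, shape=(224, 224), sigma=8.0):
    """Render labeled grasp centres (row, col) as a quality map in [0, 1]."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heatmap = np.zeros(shape, dtype=np.float32)
    for (r, c) in grasp_centers:
        g = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep the strongest response per pixel
    return heatmap

# Example: two annotated grasp points on a 224x224 synthetic image.
quality_map = gaussian_grasp_heatmap([(60, 100), (150, 180)])
print(quality_map.shape, quality_map.max())  # (224, 224) 1.0
```

Such a soft target lets a grasp detection network regress smoothly varying confidence around each annotated position instead of fitting isolated positive pixels.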