Grasp pose estimation is an important issue for robots to interact with the real world. However, most existing methods require either exact 3D object models available beforehand or a large amount of grasp annotations for training. To avoid these problems, we propose TransGrasp, a category-level grasp pose estimation method that predicts grasp poses for a category of objects by labeling only one object instance. Specifically, we perform grasp pose transfer across a category of objects based on their shape correspondences and propose a grasp pose refinement module to further fine-tune the gripper's grasp pose so as to ensure successful grasps. Experiments demonstrate the effectiveness of our method in achieving high-quality grasps with the transferred grasp poses. Our code is available at https://github.com/yanjh97/TransGrasp.
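To illustrate the transfer step described above, the following is a minimal sketch, not the paper's implementation: it assumes dense point-to-point correspondences between a labeled source instance and an unlabeled target instance of the same category (e.g., from a learned categorical shape space) are already available, and the function name and interface are hypothetical. It transfers an annotated 6-DoF grasp pose by fitting a rigid transform (Kabsch algorithm) on the corresponding points near the grasp center; the paper additionally refines the transferred pose with a dedicated module, which this sketch omits.

```python
import numpy as np

def transfer_grasp_pose(src_points, tgt_points, grasp_pose, k=64):
    """Transfer a 6-DoF grasp pose from a source to a target instance.

    Hypothetical helper: src_points and tgt_points are (N, 3) arrays that
    are row-wise corresponding (point i on the source matches point i on
    the target); grasp_pose is a 4x4 homogeneous matrix annotated on the
    source instance. Returns the pose expressed on the target instance.
    """
    # Select the k source points nearest to the grasp center; their
    # correspondents on the target anchor the transferred pose.
    center = grasp_pose[:3, 3]
    idx = np.argsort(np.linalg.norm(src_points - center, axis=1))[:k]
    src_local, tgt_local = src_points[idx], tgt_points[idx]

    # Kabsch/Procrustes: rigid transform mapping the source neighborhood
    # onto the corresponding target neighborhood.
    src_c, tgt_c = src_local.mean(0), tgt_local.mean(0)
    H = (src_local - src_c).T @ (tgt_local - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c

    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T @ grasp_pose
```

In practice the transferred pose is only an initialization; shape variation within a category means the gripper may intersect or miss the target surface, which is why a refinement stage such as the paper's grasp pose refinement module is needed afterward.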