We introduce a neural implicit representation for grasps of objects from multiple robotic hands. Different grasps across multiple robotic hands are encoded into a shared latent space. Each latent vector is learned to decode into the 3D shape of an object and the 3D shape of a robotic hand in a grasping pose, represented as the signed distance functions of the two shapes. In addition, the distance metric in the latent space is learned to preserve the similarity between grasps across different robotic hands, where the similarity of grasps is defined according to the contact regions of the robotic hands. This property enables our method to transfer grasps between different grippers, including a human hand; such grasp transfer has the potential to share grasping skills between robots and to enable robots to learn grasping skills from humans. Furthermore, the encoded signed distance functions of objects and grasps in our implicit representation can be used for 6D object pose estimation with grasping contact optimization from partial point clouds, which enables robotic grasping in the real world.
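To make the representation concrete, the following is a minimal sketch of the kind of decoder the abstract describes: a DeepSDF-style MLP that maps a shared grasp latent code and a 3D query point to two signed distances, one for the object surface and one for the gripper surface in its grasping pose. The class name, layer sizes, and latent dimension are hypothetical choices for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GraspSDFDecoder(nn.Module):
    """Hypothetical sketch: decode a shared grasp latent code and 3D query
    points into two signed distances (object surface, gripper surface)."""

    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # outputs [sdf_object, sdf_hand]
        )

    def forward(self, latent, points):
        # latent: (B, latent_dim), points: (B, N, 3)
        z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([z, points], dim=-1))

# Usage: query both signed distance fields at sampled 3D points
# for a single grasp latent vector.
decoder = GraspSDFDecoder()
z = torch.randn(1, 256)             # one grasp latent code
pts = torch.rand(1, 1024, 3) - 0.5  # query points in the object frame
sdf_object, sdf_hand = decoder(z, pts).unbind(dim=-1)
```

Under this sketch, grasp transfer and pose estimation amount to optimizing over the latent code (and a rigid transform) so that the decoded signed distances agree with observed geometry or with the contact regions of a source grasp.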