Robotic dexterous grasping is the first step toward human-like dexterous object manipulation and is thus a crucial robotic technology. However, dexterous grasping remains far less explored than object grasping with parallel grippers, partly due to the lack of a large-scale dataset. In this work, we present a large-scale robotic dexterous grasp dataset, DexGraspNet, generated by our proposed highly efficient synthesis method, which applies generally to any dexterous hand. Our method leverages a heavily accelerated differentiable force closure estimator and can therefore efficiently and robustly synthesize stable and diverse grasps at scale. We choose ShadowHand and generate 1.32 million grasps for 5355 objects, covering more than 133 object categories and containing more than 200 diverse grasps per object instance, with all grasps validated in the Isaac Gym simulator. Compared to the previous dataset from Liu et al. generated with GraspIt!, our dataset contains not only more objects and grasps but also higher diversity and quality. Through cross-dataset experiments, we show that training several dexterous grasp synthesis algorithms on our dataset yields significantly better results than training on the previous one. To access our data and code, including code for human and Allegro grasp synthesis, please visit our project page: https://pku-epic.github.io/DexGraspNet/.
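For readers unfamiliar with the differentiable force closure estimator mentioned above, the sketch below illustrates the core idea following Liu et al.: stack the contact positions into a grasp map G and score a grasp by how far the wrench produced by the contact normals is from zero, an energy that is differentiable with respect to the hand pose through the contact points. This is a minimal, hypothetical sketch (the function name force_closure_energy and the batched tensor layout are illustrative assumptions); the full synthesis pipeline additionally uses friction-cone, penetration, and joint-limit terms and runs in large GPU batches.

```python
import torch

def skew(v):
    """Batched skew-symmetric matrices: v of shape (..., 3) -> (..., 3, 3)."""
    zero = torch.zeros_like(v[..., 0])
    return torch.stack([
        torch.stack([zero, -v[..., 2], v[..., 1]], dim=-1),
        torch.stack([v[..., 2], zero, -v[..., 0]], dim=-1),
        torch.stack([-v[..., 1], v[..., 0], zero], dim=-1),
    ], dim=-2)

def force_closure_energy(contact_points, contact_normals):
    """
    Differentiable force-closure estimate for a batch of grasps (illustrative).

    contact_points:  (B, n, 3) contact positions on the object surface
    contact_normals: (B, n, 3) unit inward normals at those contacts
    Returns: (B,) energies; smaller values indicate grasps closer to force closure.
    """
    B, n, _ = contact_points.shape
    # Grasp map G = [ I ... I ; [x_1]_x ... [x_n]_x ], assembled as (B, 6, 3n).
    eye = torch.eye(3, device=contact_points.device).expand(B, n, 3, 3)
    G = torch.cat([eye, skew(contact_points)], dim=-2)      # (B, n, 6, 3)
    G = G.permute(0, 2, 1, 3).reshape(B, 6, 3 * n)          # (B, 6, 3n)
    c = contact_normals.reshape(B, 3 * n, 1)                # (B, 3n, 1)
    # ||G c||: residual net wrench when each contact pushes along its normal.
    return torch.linalg.vector_norm(G @ c, dim=(-2, -1))    # (B,)
```

Because the energy is a smooth function of the contact points, it can be minimized directly with gradient descent over the hand's pose and joint angles, which is what makes large-scale grasp synthesis tractable compared to sampling-based planners such as GraspIt!.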