We propose UniDexGrasp++, a novel, object-agnostic method for learning a universal policy for dexterous object grasping from realistic point-cloud observations and proprioceptive information in a table-top setting. To address the challenge of learning a vision-based policy across thousands of object instances, we propose Geometry-aware Curriculum Learning (GeoCurriculum) and Geometry-aware iterative Generalist-Specialist Learning (GiGSL), which leverage the geometric features of the task and significantly improve generalizability. With these techniques, our final policy demonstrates universal dexterous grasping over thousands of object instances, achieving success rates of 85.4% on the train set and 78.2% on the test set, outperforming the state-of-the-art baseline UniDexGrasp by 11.7% and 11.3%, respectively.
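To make the curriculum idea concrete, below is a minimal sketch, in the spirit of GeoCurriculum, of how a geometry-aware curriculum over many object instances might be organized: objects are embedded by their point-cloud geometry, ordered by similarity to a seed object, and training stages grow outward from that seed. The toy `geometry_embedding`, the seed-based ordering, and the stage schedule are illustrative assumptions, not the paper's exact algorithm (which would use a learned point-cloud encoder).

```python
# Hypothetical sketch of a geometry-aware curriculum: train on objects most
# similar to a seed first, then progressively widen the training set.
import numpy as np

def geometry_embedding(point_cloud: np.ndarray) -> np.ndarray:
    """Toy geometry feature: per-axis mean and std of the points, concatenated.
    A learned encoder (e.g., PointNet) would replace this in practice."""
    return np.concatenate([point_cloud.mean(axis=0), point_cloud.std(axis=0)])

def build_curriculum(point_clouds: list[np.ndarray],
                     seed_index: int = 0,
                     num_stages: int = 4) -> list[list[int]]:
    """Order objects by embedding distance to a seed object, then split the
    ordering into cumulative stages: stage k trains on the k nearest chunks."""
    feats = np.stack([geometry_embedding(pc) for pc in point_clouds])
    dists = np.linalg.norm(feats - feats[seed_index], axis=1)
    order = np.argsort(dists)
    chunks = np.array_split(order, num_stages)
    stages, pool = [], []
    for chunk in chunks:
        pool.extend(chunk.tolist())
        stages.append(list(pool))  # each stage includes all earlier objects
    return stages

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 16 synthetic "objects" of increasing spatial scale stand in for the dataset.
    clouds = [rng.normal(scale=s, size=(1024, 3)) for s in np.linspace(0.5, 2.0, 16)]
    for k, stage in enumerate(build_curriculum(clouds)):
        print(f"stage {k}: trains on {len(stage)} objects")
```

In this scheme, the policy sees geometrically similar objects early, which eases optimization before the full diversity of thousands of instances is introduced; an iterative generalist-specialist stage such as GiGSL would then partition the objects among specialists and distill them back into a single generalist.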