We propose UniDexGrasp++, a novel, object-agnostic method for learning a universal policy for dexterous object grasping from realistic point-cloud observations and proprioceptive information in a table-top setting. To address the challenge of learning a vision-based policy across thousands of object instances, we propose Geometry-aware Curriculum Learning (GeoCurriculum) and Geometry-aware iterative Generalist-Specialist Learning (GiGSL), which leverage the geometric features of the task and significantly improve generalizability. With these techniques, our final policy demonstrates universal dexterous grasping over thousands of object instances, achieving success rates of 85.4% on the train set and 78.2% on the test set, outperforming the state-of-the-art baseline UniDexGrasp by 11.7% and 11.3%, respectively.