Recent advances in multi-fingered robotic grasping have enabled fast 6-Degrees-Of-Freedom (DOF) single-object grasping. Multi-finger grasping in cluttered scenes, on the other hand, remains mostly unexplored due to the added difficulty of reasoning over obstacles, which greatly increases the computational time needed to generate high-quality collision-free grasps. In this work we address these limitations by introducing DDGC, a fast generative multi-finger grasp sampling method that can generate high-quality grasps in cluttered scenes from a single RGB-D image. DDGC is built as a network that encodes scene information to produce coarse-to-fine collision-free grasp poses and configurations. We experimentally benchmark DDGC against the simulated-annealing planner in GraspIt! on 1200 simulated cluttered scenes and 7 real-world scenes. The results show that DDGC outperforms the baseline in synthesizing high-quality grasps and removing clutter while being 5 times faster. This, in turn, opens the door to using multi-finger grasps in practical applications, which has so far been limited by the excessive computation time required by other methods.