We formulate grasp learning as a neural field and present Neural Grasp Distance Fields (NGDF). The input is a 6D pose of a robot end effector, and the output is the distance to a continuous manifold of valid grasps for an object. In contrast to current approaches that predict a set of discrete candidate grasps, the distance-based NGDF representation is easily interpreted as a cost, and minimizing this cost produces a successful grasp pose. This grasp distance cost can be incorporated directly into a trajectory optimizer for joint optimization with other costs such as trajectory smoothness and collision avoidance. During optimization, as the various costs are balanced and minimized, the grasp target is allowed to vary smoothly because the learned grasp field is continuous. We evaluate NGDF on joint grasp and motion planning in simulation and the real world, outperforming baselines by 63% execution success while generalizing to unseen query poses and unseen object shapes. Project page: https://sites.google.com/view/neural-grasp-distance-fields.
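To make the idea of using the predicted grasp distance as a differentiable cost concrete, the following minimal sketch (not the authors' code) treats a stand-in MLP as the learned field and minimizes its output over an end-effector pose with gradient descent. The network architecture, the 7D position-plus-quaternion pose parameterization, and the cost weights are illustrative assumptions; a real NGDF would be trained on grasp data, condition on object shape, and be combined with smoothness and collision costs inside a trajectory optimizer.

```python
# Hypothetical sketch: using a learned grasp distance field as a cost.
import torch
import torch.nn as nn

class GraspDistanceField(nn.Module):
    """Stand-in MLP mapping a 7D end-effector pose (xyz + quaternion)
    to a scalar distance to the valid-grasp manifold. A trained NGDF
    would replace this and also condition on object geometry."""
    def __init__(self, pose_dim=7, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # distances are non-negative
        )

    def forward(self, pose):
        return self.net(pose).squeeze(-1)

ngdf = GraspDistanceField()  # pretrained weights would be loaded here

# Optimize a pose by treating the predicted grasp distance as a cost.
pose = torch.tensor([0.3, 0.0, 0.4, 0.0, 0.0, 0.0, 1.0], requires_grad=True)
opt = torch.optim.Adam([pose], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    grasp_cost = ngdf(pose)                # distance to the grasp manifold
    reg_cost = (pose[3:].norm() - 1.0) ** 2  # keep the quaternion normalized
    loss = grasp_cost + 0.1 * reg_cost     # smoothness and collision costs
    loss.backward()                        # would be added here in a full
    opt.step()                             # joint trajectory optimization
```

Because the field is continuous, the gradient of the grasp cost is available at any query pose, which is what allows the grasp target to shift smoothly as the other costs pull on the trajectory.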