Real-time robotic grasping that supports a subsequent precise in-hand manipulation task is a priority target for highly advanced autonomous systems. However, an algorithm that performs sufficiently accurate grasping with time efficiency has yet to be found. This paper proposes a novel two-stage method that combines fast 2D object recognition using a deep neural network with a subsequent accurate and fast 6D pose estimation based on the Point Pair Feature framework, forming a real-time 3D object recognition and grasping solution capable of handling scenes with multiple object classes. The proposed solution has the potential to perform robustly in real-time applications that require both efficiency and accuracy. To validate our method, we conducted extensive and thorough experiments, including the laborious preparation of our own dataset. The results show that the proposed method scores 97.37% accuracy under the 5cm5deg metric and 99.37% under the Average Distance metric, an overall relative improvement of 62% (5cm5deg metric) and 52.48% (Average Distance metric). Moreover, the pose estimation stage also showed an average improvement of 47.6% in running time. Finally, to illustrate the overall efficiency of the system in real-time operation, a pick-and-place robotic experiment was conducted and achieved a convincing success rate of 90%. A video of this experiment is available at https://sites.google.com/view/dl-ppf6dpose/.
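The two-stage hand-off summarized above (a 2D detection box narrowing the point cloud handed to PPF-based 6D pose estimation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the ROI-cropping step are assumptions, and only the four-dimensional point pair feature itself follows the standard PPF framework definition (distance between two points, angles of each normal to the connecting segment, and the angle between the normals).

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Standard 4D Point Pair Feature of the PPF framework:
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    where d = p2 - p1 and n1, n2 are unit surface normals."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12:               # degenerate pair: same point
        return np.zeros(4)
    d_hat = d / dist

    def angle(a, b):
        # Clip to guard against rounding just outside [-1, 1].
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2)])

def crop_cloud_by_roi(points, pixels, roi):
    """Hypothetical stage-1 -> stage-2 hand-off: keep only the 3D points
    whose image projections fall inside a 2D detection box (x1, y1, x2, y2),
    so pose estimation runs on a small region instead of the full scene."""
    x1, y1, x2, y2 = roi
    mask = ((pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) &
            (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2))
    return points[mask]
```

Restricting PPF matching to the detected region is what makes the second stage fast enough for real-time use: the quadratic cost of pairing scene points is paid only on the cropped cloud, not the whole frame.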