Recently developed deep neural networks have achieved state-of-the-art results in 6D object pose estimation for robot manipulation. However, these supervised deep learning methods require expensive annotated training data. Current approaches for reducing these costs frequently use synthetic data from simulations, but they rely on expert knowledge and suffer from the "domain gap" when transferring to the real world. Here, we present a proof of concept for a novel approach to autonomously generating annotated training data for 6D object pose estimation. The approach is designed for learning new objects in operational environments while requiring little interaction and no expertise on the part of the user. We evaluate our autonomous data generation approach in two grasping experiments, where we achieve a grasping success rate similar to that of related work on a dataset that was not generated autonomously.