Visual perception tasks often require vast amounts of labelled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can prove difficult or time-intensive to scale up for general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large, public datasets. However, adapting these networks to novel objects, or fine-tuning existing models for different environments, requires significant time investment to generate newly labelled instances. To this end, we propose ProgressLabeller as a method for efficiently generating large amounts of 6D pose training data from color image sequences of custom scenes in a scalable manner. ProgressLabeller also supports transparent and translucent objects, for which previous methods based on dense depth reconstruction fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network in order to markedly improve downstream robotic grasp success rates. ProgressLabeller is open-source at https://github.com/huijieZH/ProgressLabeller.