The visual system of a robot has different requirements depending on the application: it may require high accuracy or reliability, be constrained by limited resources, or need fast adaptation to dynamically changing environments. In this work, we focus on the instance segmentation task and provide a comprehensive study of different techniques that allow adapting an object segmentation model in the presence of novel objects or different domains. We propose a pipeline for fast instance segmentation learning designed for robotic applications where data arrive in a stream. It is based on a hybrid method leveraging a pre-trained CNN for feature extraction and fast-to-train kernel-based classifiers. We also propose a training protocol that shortens the training time by performing feature extraction during data acquisition. We benchmark the proposed pipeline on two robotics datasets and deploy it on a real robot, i.e., the iCub humanoid. To this aim, we adapt our method to an incremental setting in which novel objects are learned online by the robot. The code to reproduce the experiments is publicly available on GitHub.
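As a rough illustration of the hybrid scheme described above (not the paper's exact implementation), the sketch below pairs a frozen pre-trained CNN used as a feature extractor with a fast-to-train kernel-based classifier fit on the extracted features. The backbone (ResNet-50) and the classifier (an RBF-kernel SVM) are illustrative assumptions, chosen only to show how feature extraction can be decoupled from classifier training so that features can be computed while data are being acquired.

```python
# Minimal sketch, assuming a torchvision backbone and a scikit-learn kernel
# classifier; both are stand-ins for the components used in the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Frozen pre-trained backbone used purely as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Return one feature vector per image crop (e.g. a candidate region)."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# Features can be extracted on the fly during the data-acquisition stream,
# so that only the lightweight kernel classifier is trained afterwards:
#   X_train = extract_features(training_crops); y_train = labels
#   clf = SVC(kernel="rbf").fit(X_train, y_train)
#   predictions = clf.predict(extract_features(new_crops))
```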