In this paper, we propose a novel method to learn internal feature representation models that are \textit{compatible} with previously learned ones. Compatible features enable direct comparison between old and new learned features, allowing them to be used interchangeably over time. This eliminates the need for visual search systems to re-extract features for all previously seen images in the gallery-set when sequentially upgrading the representation model. Extracting new features is typically quite expensive or infeasible for very large gallery-sets and/or real-time systems (e.g., face-recognition systems, social networks, life-long learning systems, robotics, and surveillance systems). Our approach, called Compatible Representations via Stationarity (CoReS), achieves compatibility by encouraging stationarity in the learned representation without relying on previously learned models. Stationarity ensures that the statistical properties of the features do not change under time shift, so that features learned by the current model remain inter-operable with the old ones. We evaluate single and sequential multi-model upgrading on growing large-scale training datasets and show that our method improves the state-of-the-art in achieving compatible features by a large margin. In particular, upgrading ten times with training data drawn from CASIA-WebFace and evaluating on Labeled Faces in the Wild (LFW), we obtain a 49\% increase in the average number of times compatibility is achieved, a 544\% relative improvement over the previous state-of-the-art.
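To make the notion of interchangeable features concrete, the following minimal Python sketch shows how a gallery indexed with an old model could be searched directly with queries embedded by an upgraded model, assuming the two representations are compatible. The model and function names are illustrative placeholders, not the CoReS implementation.

\begin{verbatim}
# Minimal sketch (illustrative, not the paper's code): with compatible
# representations, queries embedded by the *new* model are matched directly
# against gallery features extracted by the *old* model, so the gallery
# never needs to be re-indexed after a model upgrade.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize feature vectors so cosine similarity is a dot product."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_model_search(query_images, gallery_features_old, new_model, top_k=5):
    """Rank old-model gallery features against new-model query features.

    `new_model` is any callable mapping images to feature vectors
    (hypothetical placeholder for the upgraded representation model).
    """
    q = l2_normalize(new_model(query_images))   # features from the upgraded model
    g = l2_normalize(gallery_features_old)      # features extracted before the upgrade
    sims = q @ g.T                              # (num_queries, gallery_size) cosine similarities
    return np.argsort(-sims, axis=1)[:, :top_k] # top-k gallery indices per query
\end{verbatim}

Without compatibility, the gallery features would have to be recomputed with the new model before such a search; with compatible (stationary) representations, the old features can be reused as-is.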