Recently, incremental learning for person re-identification has received increasing attention, as it is considered a more practical setting for real-world applications. However, existing works make the strong assumption that the cameras are fixed and that newly emerging data are class-disjoint from previous classes. In this paper, we focus on a new and more practical task, namely Camera-Incremental Person ReID (CIP-ReID). CIP-ReID requires ReID models to continuously learn informative representations, using only the data from newly installed cameras, without forgetting previously learned ones. This is challenging because the new data have only local supervision within the new cameras, with no access to the old data due to privacy concerns, and they may also contain persons seen by previous cameras. To address this problem, we propose a non-exemplar-based framework named JPL-ReID. JPL-ReID first adopts a one-vs-all detector to discover persons who have appeared in previous cameras. To maintain the learned representations, JPL-ReID utilizes a similarity distillation strategy that requires no access to previous training data. Simultaneously, JPL-ReID is capable of learning new knowledge to improve its generalization ability via a Joint Plasticity Learning objective. Comprehensive experimental results on two datasets demonstrate that our proposed method significantly outperforms the comparative methods and achieves state-of-the-art results with remarkable advantages.