Lifelong object re-identification incrementally learns from a stream of re-identification tasks. The objective is to learn a representation that applies to all tasks and that generalizes to previously unseen re-identification tasks. The main challenge is that, at inference time, the representation must generalize to previously unseen identities. To address this problem, we apply continual meta metric learning to lifelong object re-identification. To prevent forgetting of previous tasks, we use knowledge distillation and explore the roles of positive and negative pairs. Based on our observation that the distillation and metric losses are antagonistic, we propose to remove positive pairs from distillation to robustify model updates. Our method, called Distillation without Positive Pairs (DwoPP), is evaluated on extensive intra-domain experiments on person and vehicle re-identification datasets, as well as inter-domain experiments on the LReID benchmark. Our experiments demonstrate that DwoPP significantly outperforms the state of the art. Code is available at: https://github.com/wangkai930418/DwoPP_code
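The core idea of removing positive pairs from distillation can be illustrated with a minimal sketch. This is not the authors' exact loss; it assumes a common formulation in which the old (frozen) model's pairwise similarities are distilled into the new model, and simply masks out same-identity (positive) pairs so distillation does not pull against the metric loss. The function name `dwopp_distill_loss` and the choice of cosine similarity with a squared-error gap are illustrative assumptions.

```python
import numpy as np

def cosine_sim_matrix(x):
    """Pairwise cosine similarities of row vectors in x."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def dwopp_distill_loss(old_emb, new_emb, labels):
    """Hypothetical sketch of distillation without positive pairs:
    squared gap between old/new pairwise similarities, averaged over
    negative pairs only (different identity labels)."""
    s_old = cosine_sim_matrix(np.asarray(old_emb, dtype=float))
    s_new = cosine_sim_matrix(np.asarray(new_emb, dtype=float))
    labels = np.asarray(labels)
    # Mask keeps only negative pairs; positives (and the diagonal,
    # which compares a sample with itself) are excluded.
    neg_mask = labels[:, None] != labels[None, :]
    return ((s_old - s_new) ** 2)[neg_mask].mean()
```

In a training loop, this term would be added to the metric-learning loss on the current task, with `old_emb` produced by the frozen copy of the model from the previous task.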