Achieving backward compatibility when rolling out new models can greatly reduce costs, or even eliminate the need to re-encode the features of existing gallery images, for in-production visual retrieval systems. Previous works typically leverage knowledge-distillation-style losses, which can cause performance degradation or fail to guarantee compatibility. To address these issues, we propose a general framework called Learning Compatible Embeddings (LCE), which is applicable both to cross-model compatibility and to compatible training in direct/forward/backward manners. Compatibility is achieved by aligning class centers between models, either directly or via a transformation, and by enforcing more compact intra-class distributions for the new model. Experiments are conducted across extensive scenarios, including changes to the training dataset, loss function, network architecture, and feature dimension, and demonstrate that LCE efficiently enables model compatibility with only marginal sacrifices in accuracy. The code will be available at https://github.com/IrvingMeng/LCE.
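The two ingredients named above, center alignment across models and intra-class compactness for the new model, can be illustrated with a minimal sketch. The abstract does not give the exact loss, so the function below is an assumption-labeled illustration: a learned linear map `T` (hypothetical) carries the new model's class centers into the old embedding space, and a hinge with margin `margin` (hypothetical parameter) encourages each new embedding to stay close to its own class center.

```python
import numpy as np

def lce_style_loss(emb_new, labels, centers_new, centers_old, T, margin=0.2):
    """Illustrative compatibility loss, NOT the paper's exact formulation.

    emb_new:     (N, d_new) embeddings from the new model
    labels:      (N,) integer class labels
    centers_new: (C, d_new) class centers of the new model
    centers_old: (C, d_old) class centers of the old model
    T:           (d_new, d_old) transformation into the old embedding space
    """
    # Alignment term: map new-model class centers into the old model's
    # space via T and pull them toward the corresponding old centers.
    align = np.mean(np.sum((centers_new @ T - centers_old) ** 2, axis=1))
    # Compactness term: penalize embeddings farther than `margin` from
    # their own class center (hinge, so compact samples incur no loss).
    dists = np.linalg.norm(emb_new - centers_new[labels], axis=1)
    compact = np.mean(np.maximum(dists - margin, 0.0))
    return align + compact
```

When the old and new models share an embedding space, `T` can be fixed to the identity, which corresponds to the "direct" alignment mentioned above; otherwise `T` would be optimized jointly with the new model.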