This paper presents a novel method for embedding transfer, the task of transferring knowledge from a learned embedding model to another. Our method exploits pairwise similarities between samples in the source embedding space as the knowledge, and transfers them through a loss used for learning target embedding models. To this end, we design a new loss called relaxed contrastive loss, which employs the pairwise similarities as relaxed labels for inter-sample relations. Our loss provides a rich supervisory signal beyond class equivalence, enables more important pairs to contribute more to training, and imposes no restriction on the manifold of the target embedding space. Experiments on metric learning benchmarks demonstrate that our method substantially improves performance, or effectively reduces the size and output dimension of target models. We further show that it can also be used to enhance the quality of self-supervised representations and the performance of classification models. In all the experiments, our method clearly outperforms existing embedding transfer techniques.
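To make the idea concrete, below is a minimal PyTorch sketch of a relaxed contrastive loss of the kind described above. It assumes the soft labels w_ij come from a Gaussian kernel on source-space distances and that the loss takes the standard contrastive form (squared attraction, squared hinged repulsion with margin delta), weighted by those labels; the function name, the kernel choice, and the batch-mean distance normalization are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def relaxed_contrastive_loss(target_emb, source_emb, sigma=1.0, delta=1.0):
    """Sketch of a relaxed contrastive loss for embedding transfer.

    Pairwise similarities in the source embedding space act as soft
    (relaxed) labels w_ij in [0, 1], replacing binary class-equivalence
    labels when weighting attraction/repulsion between target pairs.
    """
    # Soft labels from the source space: a Gaussian kernel on pairwise
    # distances (one plausible choice of similarity; an assumption here).
    with torch.no_grad():
        src_dist = torch.cdist(source_emb, source_emb)
        w = torch.exp(-src_dist.pow(2) / sigma)

    # Pairwise distances in the target space, normalized by their batch
    # mean so no fixed scale is imposed on the target embedding space.
    tgt_dist = torch.cdist(target_emb, target_emb)
    tgt_dist = tgt_dist / (tgt_dist.mean().detach() + 1e-8)

    # Contrastive form with relaxed labels: pairs similar in the source
    # space (large w) are pulled together; dissimilar pairs (small w)
    # are pushed apart until they exceed the margin delta.
    attract = w * tgt_dist.pow(2)
    repel = (1.0 - w) * torch.relu(delta - tgt_dist).pow(2)

    n = target_emb.size(0)
    return (attract + repel).sum() / n
```

In this sketch, the continuous weights w let more informative pairs contribute more strongly to training, and the batch-mean normalization of target distances is one way to realize the claim that the loss leaves the geometry of the target embedding space unconstrained.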