Visual retrieval systems face frequent model updates and re-deployment. Re-extracting features for the entire database after every update is a heavy workload. Feature compatibility enables newly learned visual features to be compared directly with the old features stored in the database, so that when the deployed model is updated, the inflexible and time-consuming feature re-extraction process can be bypassed. However, the old feature space that must be made compatible is not ideal, and it suffers from a distribution discrepancy with the new space caused by different supervision losses. In this work, we propose Dual-Tuning, a global optimization method that achieves feature compatibility across different networks and losses. We introduce a feature-level prototype loss that explicitly aligns the two types of embedding features by transferring global prototype information. Furthermore, we design a component-level mutual structural regularization to implicitly optimize the intrinsic structure of the features. Experimental results on million-scale datasets demonstrate that Dual-Tuning obtains feature compatibility without sacrificing performance. (Our code will be available at https://github.com/yanbai1993/Dual-Tuning)
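To make the core idea concrete, the following is a minimal sketch of a feature-level prototype alignment loss in the spirit described above: each new embedding is pulled toward the class prototype of the old feature space, so new queries remain comparable with old gallery features. All names and the exact cosine-distance formulation here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def prototype_alignment_loss(new_feats, labels, old_prototypes):
    """Average cosine distance between each new embedding and the
    prototype of its class taken from the *old* feature space.

    new_feats      : list of feature vectors from the new model
    labels         : class index of each feature
    old_prototypes : per-class prototype vectors from the old space

    (Illustrative sketch; names and formulation are assumptions.)
    """
    def normalize(v):
        # L2-normalize so the dot product below is cosine similarity
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / norm for x in v]

    total = 0.0
    for feat, label in zip(new_feats, labels):
        f = normalize(feat)
        p = normalize(old_prototypes[label])
        # cosine distance = 1 - cosine similarity
        total += 1.0 - sum(a * b for a, b in zip(f, p))
    return total / len(new_feats)
```

Driving this loss toward zero aligns new embeddings with the old class prototypes, which is one simple way to keep the two feature spaces directly comparable without re-extracting the database.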