Privacy-preserving model upgrades in image retrieval aim to reap the benefits of rapidly evolving new models without accessing the raw gallery images. A pioneering work introduced backward-compatible training, which allows the new model to be deployed in a backfill-free manner, i.e., new queries can be directly compared against the old gallery features. Although a viable solution, its gains under sequential model upgrades are increasingly limited by the fixed, low-quality old gallery embeddings. To this end, we propose a new model upgrade paradigm, termed Bidirectional Compatible Training (BiCT), which upgrades the old gallery embeddings via forward-compatible training toward the embedding space of the backward-compatible new model. Comprehensive experiments verify the notable improvement brought by BiCT, and we observe, interestingly, that the seemingly minor loss weight on backward compatibility in fact plays an essential role in both backward and forward retrieval performance. In summary, we introduce a new and valuable problem, privacy-preserving model upgrades, together with an effective solution, BiCT, and offer several intriguing insights for getting the most out of our method.
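To make the upgrade step concrete, below is a minimal, purely illustrative sketch of the forward-compatible stage described above: a lightweight transformation is trained to map the fixed old gallery embeddings toward the new model's embedding space, so the stored features can be upgraded without touching the raw gallery images. All names and the choice of a linear map trained by gradient descent are assumptions for illustration, not the paper's actual architecture or losses.

```python
import numpy as np

# Illustrative setup: the raw gallery images are unavailable; only their
# old embeddings are stored. We simulate a "new" embedding space as an
# unknown linear transform of the old one (a stand-in for the
# backward-compatible new model's features on the same gallery).
rng = np.random.default_rng(0)
n, d = 100, 8
old_gallery = rng.normal(size=(n, d))              # fixed old gallery embeddings
W_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))
new_space = old_gallery @ W_true                   # target: new-model embeddings

# Forward-compatible upgrade module (hypothetical): learn a linear map W
# that pulls old embeddings toward the new space by minimizing the mean
# squared alignment error with plain gradient descent.
W = np.eye(d)
lr = 0.1
for _ in range(300):
    diff = old_gallery @ W - new_space
    grad = old_gallery.T @ diff / n
    W -= lr * grad

upgraded_gallery = old_gallery @ W                 # backfill-free upgrade
err = float(np.mean((upgraded_gallery - new_space) ** 2))
```

After training, `upgraded_gallery` can serve in place of the stale old features when answering new queries; in the actual method, the backward-compatibility loss weight on the new model jointly shapes how easy this alignment is.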