Modern retrieval systems often require recomputing the representation of every piece of data in the gallery when updating to a better representation model. This process is known as backfilling and can be especially costly in the real world, where the gallery often contains billions of samples. Recently, researchers have proposed the idea of Backward Compatible Training (BCT), where the new representation model is trained with an auxiliary loss to make it backward compatible with the old representation. In this way, the new representation can be directly compared with the old representation, in principle avoiding the need for any backfilling. However, follow-up work shows an inherent tradeoff: a backward compatible representation model cannot simultaneously maintain the performance of the new model itself. This paper reports our ``not-so-surprising'' finding that adding extra dimensions to the representation can help here. However, we also find that naively increasing the dimension of the representation does not work. To deal with this, we propose Backward-compatible Training with a novel Basis Transformation ($BT^2$). A basis transformation (BT) is a learnable set of parameters that applies an orthonormal transformation. Such a transformation has the important property that the information contained in its input is fully retained in its output. We show in this paper how a BT can be used to add only the necessary number of additional dimensions. We empirically verify the advantage of $BT^2$ over other state-of-the-art methods in a wide range of settings. We then further extend $BT^2$ to other challenging yet more practical settings, including significant changes in model architecture (CNN to Transformers), modality change, and even a series of architecture updates mimicking the evolution of deep learning models.
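To make the core idea concrete, below is a minimal PyTorch sketch, not the authors' released code, of how a learnable orthonormal basis transformation can be combined with a few extra embedding dimensions. All names (BasisTransform, ExtendedNewEncoder, d_new, d_total) are illustrative assumptions; the exact formulation and training losses in $BT^2$ may differ.

```python
# Sketch: extend the new model's d_new-dim embedding with extra dimensions,
# then apply a learnable orthonormal transform over the full d_total-dim space.
# Because the transform is orthonormal, it preserves inner products and norms,
# so no information in the extended embedding is lost.

import torch
import torch.nn as nn


class BasisTransform(nn.Module):
    """Learnable orthonormal transformation R^d -> R^d.

    The orthogonal matrix is taken as the Q factor of a learnable square
    matrix via QR decomposition, which keeps it orthonormal throughout training.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(dim, dim))

    def orthonormal_matrix(self) -> torch.Tensor:
        q, _ = torch.linalg.qr(self.raw)  # Q is orthonormal for full-rank input
        return q

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.orthonormal_matrix().T


class ExtendedNewEncoder(nn.Module):
    """Appends extra learned dimensions to the new embedding, then rotates it.

    One way such an output could be used for backward compatibility: after the
    basis transformation, a fixed slice of the coordinates is trained to align
    with the old gallery embeddings, while the full vector serves as the new
    representation (hypothetical usage, for illustration only).
    """

    def __init__(self, new_encoder: nn.Module, d_new: int, d_total: int):
        super().__init__()
        assert d_total >= d_new
        self.new_encoder = new_encoder
        self.extra = nn.Linear(d_new, d_total - d_new)  # extra dimensions
        self.bt = BasisTransform(d_total)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.new_encoder(x)                         # (B, d_new)
        z_ext = torch.cat([z, self.extra(z)], dim=-1)   # (B, d_total)
        return self.bt(z_ext)


# Usage sketch with a toy encoder:
if __name__ == "__main__":
    toy_encoder = nn.Linear(32, 128)                    # stands in for the new model
    model = ExtendedNewEncoder(toy_encoder, d_new=128, d_total=160)
    out = model(torch.randn(4, 32))
    print(out.shape)                                    # torch.Size([4, 160])
```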