Contrastive learning has been applied successfully to learn vector representations of text. Previous research demonstrated that learning high-quality representations benefits from batch-wise contrastive loss with a large number of negatives. In practice, the technique of in-batch negatives is used: for each example in a batch, the positives of all other batch examples are taken as its negatives, avoiding the cost of encoding extra negatives. This, however, still conditions each example's loss on all batch examples and requires fitting the entire large batch into GPU memory. This paper introduces a gradient caching technique that decouples backpropagation between the contrastive loss and the encoder, removing the encoder's backward-pass data dependency along the batch dimension. As a result, gradients can be computed for one subset of the batch at a time, leading to almost constant memory usage.
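As an illustration of the idea, the following is a minimal PyTorch sketch of one gradient-caching training step with a dot-product in-batch-negative contrastive loss. The names `grad_cache_step`, `contrastive_loss`, and `encoder`, the chunking scheme, and the shapes in the usage example are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of gradient caching for an in-batch-negative contrastive loss.
# The encoder, dimensions, and chunk size below are illustrative assumptions.
import torch
import torch.nn.functional as F


def contrastive_loss(q, p):
    # In-batch negatives: each query's positive sits on the diagonal of the
    # score matrix; every other passage in the batch acts as a negative.
    scores = q @ p.T
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(scores, labels)


def grad_cache_step(encoder, queries, passages, optimizer, chunk_size):
    # Step 1: encode all chunks WITHOUT building the encoder graph, so only
    # the small representation tensors (not activations) stay in memory.
    with torch.no_grad():
        q_reps = torch.cat([encoder(c) for c in queries.split(chunk_size)])
        p_reps = torch.cat([encoder(c) for c in passages.split(chunk_size)])

    # Step 2: compute the full-batch loss on the detached representations and
    # cache the gradients of the loss w.r.t. each representation.
    q_reps.requires_grad_(True)
    p_reps.requires_grad_(True)
    loss = contrastive_loss(q_reps, p_reps)
    loss.backward()
    q_cache, p_cache = q_reps.grad, p_reps.grad

    # Step 3: re-encode one chunk at a time WITH gradients enabled and
    # backpropagate through the encoder, feeding the cached representation
    # gradients in as the upstream gradients. Each chunk's graph is freed
    # before the next one is built, so peak memory is set by the chunk size.
    optimizer.zero_grad()
    for data, cache in ((queries, q_cache), (passages, p_cache)):
        for chunk, grad_chunk in zip(data.split(chunk_size),
                                     cache.split(chunk_size)):
            reps = encoder(chunk)
            reps.backward(gradient=grad_chunk)

    optimizer.step()
    return loss.item()


# Usage (hypothetical encoder and data shapes):
encoder = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-3)
queries, passages = torch.randn(256, 128), torch.randn(256, 128)
loss = grad_cache_step(encoder, queries, passages, optimizer, chunk_size=32)
```

Because each chunk's computation graph in step 3 is built and freed independently, peak activation memory scales with the chunk size rather than the full batch size, while the accumulated parameter gradients match those of a single full-batch update (up to nondeterminism from stochastic layers such as dropout, which this sketch does not control for).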