Real-world applications of neural language models often involve running many different models over the same corpus. The high computational cost of these runs has led to interest in techniques that can reuse the contextualized embeddings produced in previous runs to speed training and inference of future ones. We refer to this approach as embedding recycling (ER). While multiple ER techniques have been proposed, their practical effectiveness is still unknown because existing evaluations consider very few models and do not adequately account for overhead costs. We perform an extensive evaluation of ER across eight different models (17 to 900 million parameters) and fourteen tasks in English. We show that a simple ER technique, which caches activations from an intermediate layer of a pretrained model and learns task-specific adapters on the later layers, is broadly effective. For the best-performing baseline in our experiments (DeBERTa-v2 XL), adding a precomputed cache results in a >90% speedup during training and an 87-91% speedup for inference, with negligible impact on accuracy. Our analysis reveals important areas of future work.
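The caching scheme described above can be sketched as follows. This is a minimal toy illustration in PyTorch, not the paper's implementation: the `ToyEncoder`, `Adapter`, and the split point `k` are all hypothetical stand-ins for a real pretrained model. The idea is that the lower `k` layers are run once per document and their activations cached; every later task run starts from the cache and trains only small adapters on the upper layers.

```python
# Hedged sketch of embedding recycling (ER): cache lower-layer
# activations, train task-specific adapters on the upper layers.
# ToyEncoder/Adapter are illustrative, not the paper's code.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Small bottleneck adapter with a residual connection."""

    def __init__(self, d=32, bottleneck=8):
        super().__init__()
        self.down = nn.Linear(d, bottleneck)
        self.up = nn.Linear(bottleneck, d)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))


class ToyEncoder(nn.Module):
    """Stand-in for a pretrained transformer encoder."""

    def __init__(self, d=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )

    def lower(self, x, k):
        # Run layers [0, k): computed once per corpus, then cached.
        for layer in self.layers[:k]:
            x = layer(x)
        return x

    def upper(self, x, k, adapters):
        # Run layers [k, n) with a trainable adapter after each one.
        for layer, adapter in zip(self.layers[k:], adapters):
            x = adapter(layer(x))
        return x


enc = ToyEncoder().eval()  # pretrained body stays frozen
k = 2                      # hypothetical split point
adapters = nn.ModuleList(Adapter() for _ in range(len(enc.layers) - k))

x = torch.randn(2, 5, 32)  # (batch, tokens, hidden)
with torch.no_grad():
    cache = enc.lower(x, k)    # in practice: stored on disk, reused across runs
out = enc.upper(cache, k, adapters)  # only the adapters receive gradients
```

Because the lower layers and the cache are frozen, each new task pays only for the upper-layer forward pass plus the adapters, which is the source of the training and inference speedups the abstract reports.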