Large pretrained language models have shown a surprising in-context learning (ICL) ability: given a few demonstration input-label pairs, they can predict the label for an unseen input without any additional parameter updates. Despite its strong empirical performance, the working mechanism of ICL remains an open problem. To better understand how ICL works, this paper explains language models as meta-optimizers and casts ICL as a kind of implicit finetuning. Theoretically, we show that Transformer attention has a dual form of gradient-descent-based optimization. Building on this, we understand ICL as follows: GPT first produces meta-gradients according to the demonstration examples, and then these meta-gradients are applied to the original GPT to build an ICL model. Empirically, we comprehensively compare the behavior of ICL and explicit finetuning on real tasks to provide evidence that supports our understanding. The results show that ICL behaves similarly to explicit finetuning at the prediction level, the representation level, and the attention-behavior level. Further, inspired by our meta-optimization view, we design a momentum-based attention mechanism by analogy with the momentum-based gradient descent algorithm. Its consistently better performance over vanilla attention supports our understanding from yet another aspect, and, more importantly, it shows the potential of utilizing our understanding for future model design.
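To make the dual-form claim concrete, the following is a minimal sketch under a standard relaxation not spelled out in this abstract: softmax attention is replaced by linear attention (softmax and scaling dropped). With demonstration tokens X', query-text tokens X, and a query vector q, the attention output splits into a zero-shot term and a demonstration-driven update; the names W_ZSL and ΔW_ICL follow the meta-optimization framing.

```latex
% Sketch of the dual form under a linear-attention relaxation.
% X' : demonstration tokens, X : query-text tokens, q : current query vector.
\begin{aligned}
\mathrm{Attn}(q)
  &\approx W_V [X'; X]\,\bigl(W_K [X'; X]\bigr)^{\top} q \\
  &= \underbrace{W_V X (W_K X)^{\top}}_{W_{\mathrm{ZSL}}}\, q
   + \underbrace{W_V X' (W_K X')^{\top}}_{\Delta W_{\mathrm{ICL}}}\, q .
\end{aligned}
```

The point of the split is that ΔW_ICL = Σ_i (W_V x'_i)(W_K x'_i)^⊤ is a sum of outer products over demonstration tokens, which is the same algebraic form as a gradient-descent weight update ΔW = Σ_i e_i x_i^⊤; in this sense the demonstrations supply meta-gradients applied on top of the zero-shot weights W_ZSL.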
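As an illustration of the momentum-based attention mentioned above, here is a minimal, hedged PyTorch sketch. The decay rate eta, the EMA formulation over past value vectors, and the helper name momentum_attention are illustrative choices rather than the paper's exact implementation; the idea is simply to add an exponentially decayed average of earlier attention values to the standard attention output, mirroring how momentum SGD adds an EMA of past gradients.

```python
import torch

def momentum_attention(q, K, V, eta=0.1):
    """Sketch of momentum-based attention for a single query position.

    q: (d,) query vector; K, V: (t, d) keys/values for positions 1..t.
    Vanilla attention is augmented with an exponentially decayed average
    of past value vectors, by analogy with momentum over past gradients.
    """
    d = q.shape[-1]
    # Vanilla scaled dot-product attention output.
    scores = (K @ q) / d ** 0.5                    # (t,)
    attn_out = torch.softmax(scores, dim=-1) @ V   # (d,)

    # EMA of previous values: sum_{i<t} eta^(t-i) * v_i (illustrative decay).
    t = V.shape[0]
    if t > 1:
        decay = eta ** torch.arange(t - 1, 0, -1, dtype=V.dtype)  # eta^(t-1)..eta^1
        ema = decay @ V[:-1]                       # (d,)
    else:
        ema = torch.zeros_like(attn_out)

    return attn_out + ema

# Example usage: t = 4 cached positions, model dim d = 8.
q, K, V = torch.randn(8), torch.randn(4, 8), torch.randn(4, 8)
out = momentum_attention(q, K, V, eta=0.1)         # (8,)
```

Because the EMA term does not depend on the current query, it adds negligible compute on top of vanilla attention; under the meta-optimization view, attention values play the role of meta-gradients, so averaging past values mirrors averaging past gradients in momentum SGD.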