Large pretrained language models have shown a surprising in-context learning (ICL) ability: given a few demonstration input-label pairs, they can predict the label for an unseen input without any additional parameter updates. Despite its great empirical success, the working mechanism of ICL remains an open problem. To better understand how ICL works, this paper explains language models as meta-optimizers and views ICL as a kind of implicit finetuning. Theoretically, we show that Transformer attention has a dual form of gradient-descent-based optimization. On this basis, we understand ICL as follows: GPT first produces meta-gradients according to the demonstration examples, and then these meta-gradients are applied to the original GPT to build an ICL model. Experimentally, we comprehensively compare the behaviors of ICL and explicit finetuning on real tasks to provide empirical evidence that supports our understanding. The results show that ICL behaves similarly to explicit finetuning at the prediction level, the representation level, and the attention-behavior level. Further, inspired by our understanding of meta-optimization, we design a momentum-based attention by analogy with the momentum-based gradient descent algorithm. Its consistently better performance over vanilla attention supports our understanding from another angle and, more importantly, shows the potential to apply our understanding to future model design.
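For concreteness, the following is a minimal sketch of one way to read this duality, assuming standard softmax attention is relaxed to linear (unnormalized) attention; the symbols ($W_0$, $e_i$, $x_i$, $W_V$, $W_K$, $X$, $X'$, $q$, $\Delta W_{\mathrm{ICL}}$) are introduced here for illustration and may not match the paper's exact notation.

% Hedged sketch of the dual form between (relaxed) linear attention and
% gradient descent on a linear layer; symbols are illustrative.
\begin{aligned}
% A linear layer trained by gradient descent applies its initial weights plus
% accumulated outer-product updates (error signals e_i times inputs x_i) to a test input q:
\mathcal{F}(q) &= \bigl(W_0 + \Delta W\bigr)\, q,
\qquad \Delta W = \sum_i e_i \, x_i^{\top}. \\
% Relaxing softmax attention to linear attention over the demonstration tokens X'
% concatenated with the query tokens X yields an analogous decomposition:
\mathrm{Attn}(q) &\approx W_V \,[X';\,X]\,\bigl(W_K \,[X';\,X]\bigr)^{\top} q
= \underbrace{W_V X \,(W_K X)^{\top} q}_{\text{zero-shot component}}
\;+\; \underbrace{W_V X' \,(W_K X')^{\top} q}_{\text{meta-gradient update } \Delta W_{\mathrm{ICL}}\, q} .
\end{aligned}

Under this reading, the demonstration tokens $X'$ contribute an additive update $\Delta W_{\mathrm{ICL}}$ on top of the zero-shot predictor, which is what "applying meta-gradients to the original GPT" refers to above.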
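As a further illustration of the meta-optimization analogy, here is a hedged sketch of what a momentum-based attention could look like: an exponentially decayed sum of past value vectors $v_i$ is added to the vanilla attention output, mirroring how momentum SGD accumulates past gradients. The decay coefficient $\beta$ and the exact form of the extra term are assumptions for illustration, not a definitive statement of the paper's formulation.

% Hedged sketch of momentum-based attention, by analogy with momentum SGD:
% past value vectors are accumulated with exponential decay \beta and added
% to the vanilla attention output for query position t.
\mathrm{MoAttn}(V, K, q_t) \;=\; \mathrm{Attn}(V, K, q_t) \;+\; \sum_{i=1}^{t-1} \beta^{\,t-i}\, v_i ,
\qquad 0 < \beta < 1 .

Just as the velocity term in momentum SGD reuses past gradients to smooth and accelerate updates, the extra term here reuses past values to supplement the current attention output.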