In recent years, pre-trained large language models have demonstrated a remarkable inference-time few-shot learning capability known as in-context learning. However, existing literature has highlighted the sensitivity of this capability to the selection of few-shot demonstrations, and the mechanisms by which it arises from standard language model pretraining objectives remain poorly understood. In this study, we examine the in-context learning phenomenon through a Bayesian lens, viewing large language models as topic models that implicitly infer task-related information from demonstrations. On this premise, we propose an algorithm for selecting optimal demonstrations from a set of annotated data, and we demonstrate a significant 12.5% improvement relative to the random-selection baseline, averaged over eight GPT-2 and GPT-3 models on eight real-world text classification datasets. Our empirical findings support our hypothesis that large language models implicitly infer a latent concept variable.
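The abstract does not spell out the proposed selection algorithm, so the following minimal Python sketch only illustrates the general idea of likelihood-based demonstration selection: each candidate demonstration is scored by how much it raises the model's probability of the correct label on a small validation set, and the top-scoring candidates are kept. The model choice (`gpt2`), the prompt template, and the scoring rule are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch of likelihood-based demonstration selection.
# Assumptions (not from the paper): the "Review/Sentiment" template,
# the averaged-log-likelihood score, and the use of the small GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def label_logprob(prompt: str, label: str) -> float:
    """Sum of log-probabilities the model assigns to `label` following `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Each label token at position i is predicted by the logits at position i-1.
    logprobs = torch.log_softmax(logits[0, prompt_ids.shape[1] - 1 : -1], dim=-1)
    return logprobs.gather(1, label_ids[0].unsqueeze(1)).sum().item()

def select_demonstrations(candidates, val_set, k=4):
    """Rank (text, label) candidates by average validation label likelihood."""
    def score(demo):
        demo_text = f"Review: {demo[0]}\nSentiment: {demo[1]}\n\n"
        return sum(
            label_logprob(demo_text + f"Review: {x}\nSentiment:", y)
            for x, y in val_set
        ) / len(val_set)
    return sorted(candidates, key=score, reverse=True)[:k]
```

Under the paper's Bayesian view, a demonstration that reliably increases label likelihood can be read as one that helps the model infer the latent task concept; the sketch above approximates that intuition with a direct held-out likelihood score.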