Large transformer-based language models are able to perform few-shot learning (also known as in-context learning), without having been explicitly trained for it. We hypothesized that specific distributional properties of natural language might drive this emergent phenomenon, as these characteristics might lead to a kind of interpolation between few-shot meta-training (designed to elicit rapid few-shot learning) and standard supervised training (designed to elicit gradual in-weights learning). We also hypothesized that these distributional properties could lead to emergent few-shot learning in domains outside of language. Inspired by this idea, we ran a series of experiments on a standard image-based few-shot dataset. We discovered that a number of data properties did indeed promote the emergence of few-shot learning in transformer models. All of these properties are present in natural language -- burstiness, long-tailedness, and many-to-one or one-to-many label mappings. These data properties influenced whether models were biased towards few-shot learning or towards memorizing information in their weights; models could generally perform well at only one or the other. However, we discovered that an additional distributional property could allow the two capabilities to co-exist in the same model -- a skewed, Zipfian distribution over classes -- which occurs in language as well. Notably, training data that elicited few-shot learning in transformers failed to elicit it in recurrent models. In sum, we find that few-shot learning emerges only from applying the right architecture to the right data distribution; neither component is sufficient on its own.
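To make the data properties concrete, below is a minimal sketch (not the authors' actual pipeline) of how one might sample training contexts that combine a skewed, Zipfian marginal distribution over classes with "bursty" contexts in which a small number of classes recur. All constants (NUM_CLASSES, ZIPF_EXPONENT, CONTEXT_LEN, BURSTY_CLASSES, P_BURSTY) are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: sampling training contexts with a Zipfian class
# distribution and bursty sequences (a few classes recur within a context).
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 1000     # assumed number of image classes
ZIPF_EXPONENT = 1.0    # skew of the marginal class distribution (assumption)
CONTEXT_LEN = 8        # items per training context (assumption)
BURSTY_CLASSES = 2     # classes dominating a bursty context (assumption)
P_BURSTY = 0.9         # fraction of contexts that are bursty (assumption)

# Zipfian marginal over classes: p(k) proportional to 1 / k^alpha.
ranks = np.arange(1, NUM_CLASSES + 1)
zipf_probs = ranks ** (-ZIPF_EXPONENT)
zipf_probs /= zipf_probs.sum()

def sample_context():
    """Return CONTEXT_LEN class indices for one training context."""
    if rng.random() < P_BURSTY:
        # Bursty context: pick a few classes (Zipf-weighted) and repeat them.
        focal = rng.choice(NUM_CLASSES, size=BURSTY_CLASSES,
                           replace=False, p=zipf_probs)
        return rng.choice(focal, size=CONTEXT_LEN).tolist()
    # Non-bursty context: i.i.d. draws from the Zipfian marginal.
    return rng.choice(NUM_CLASSES, size=CONTEXT_LEN, p=zipf_probs).tolist()

if __name__ == "__main__":
    for _ in range(3):
        print(sample_context())
```

Under this kind of scheme, burstiness and label-mapping choices push a model towards in-context (few-shot) learning, while the Zipfian skew leaves frequent classes available for in-weights memorization, which is one way to read the co-existence result described above.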