The repetition curse is a phenomenon in which Large Language Models (LLMs) generate repetitive or cyclic sequences of tokens. While this behavior has been widely observed, its underlying mechanisms remain poorly understood. In this work, we investigate the role of induction heads--a specific type of attention head known for performing in-context learning--in driving repetitive generation. Specifically, we focus on the "toxicity" of induction heads, which we define as their tendency to dominate the model's output logits during repetition, effectively crowding out contributions from other attention heads. Our findings have important implications for the design and training of LLMs: by identifying induction heads as a key driver of the repetition curse, we provide a mechanistic explanation for this phenomenon and suggest potential avenues for mitigation. We also propose an attention-head regularization technique that could be employed to reduce the dominance of induction heads during generation, thereby promoting more diverse and coherent outputs.
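To make the proposed mitigation concrete, below is a minimal sketch of what such an attention-head regularization term could look like in PyTorch. It is an illustration under assumptions, not the paper's implementation: the function name `head_dominance_penalty`, the use of an L2 norm as the dominance measure, the head indices, and the weight `lam` are all hypothetical, and it presumes that per-head contributions to the output logits have already been extracted (e.g., by projecting each head's output through the unembedding matrix, as in standard logit-attribution analyses).

```python
import torch

def head_dominance_penalty(head_logit_contribs, induction_head_ids, eps=1e-9):
    """Fraction of total logit mass attributable to the flagged heads.

    head_logit_contribs: [num_heads, vocab_size] tensor of per-head
        contributions to the final logits at one position.
    induction_head_ids: indices of the heads to regularize; identifying
        them (e.g., via an induction-score sweep over repeated-token
        prompts) is assumed to happen beforehand.
    """
    # Per-head "dominance": L2 norm of each head's logit contribution.
    norms = head_logit_contribs.norm(dim=-1)  # shape: [num_heads]
    # Share of total logit mass carried by the flagged induction heads.
    share = norms[induction_head_ids].sum() / (norms.sum() + eps)
    return share


# Toy demonstration with random values standing in for real contributions.
contribs = torch.randn(12, 50257)  # 12 heads, GPT-2-sized vocabulary
penalty = head_dominance_penalty(contribs, induction_head_ids=[3, 7])

task_loss = torch.tensor(2.3)      # placeholder language-modeling loss
lam = 0.1                          # hypothetical regularization weight
loss = task_loss + lam * penalty
print(f"dominance share: {penalty:.3f}, total loss: {loss:.3f}")
```

Penalizing the induction heads' *share* of logit mass, rather than their raw magnitude, keeps the term scale-invariant: the penalty only grows when these heads dominate relative to the rest, which matches the definition of toxicity above.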