Since the advent of ChatGPT, Large Language Models (LLMs) have excelled in various tasks but remain black-box systems. Understanding the reasoning bottlenecks of LLMs has become a critical challenge, as these limitations are deeply tied to their internal architecture. Within this architecture, attention heads have emerged as a focal point for investigating the underlying mechanics of LLMs. In this survey, we aim to demystify the internal reasoning processes of LLMs by systematically exploring the roles and mechanisms of attention heads. We first introduce a novel four-stage framework inspired by the human thought process: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. Using this framework, we comprehensively review existing research to identify and categorize the functions of specific attention heads. We then analyze the experimental methodologies used to discover these special heads, dividing them into two categories: Modeling-Free and Modeling-Required methods. We further summarize relevant evaluation methods and benchmarks. Finally, we discuss the limitations of current research and propose several potential future directions.