The success of Transformer language models is widely credited to their dot-product attention mechanism, which interweaves a set of key design principles: mixing information across positions (enabling multi-token interactions), sequence-dependent activations (where attention weights adapt to each input), a specific mathematical form (dot-product similarities plus softmax weighting), and coupling of queries and keys to evolving hidden states (grounding attention in the current layer). However, the necessity of each of these principles remains largely untested. In this work, we systematically deconstruct attention by designing controlled variants that selectively relax these principles, applied both uniformly across all layers and in hybrid architectures where only some layers retain standard attention. Our empirical analysis reveals that mechanisms for mixing tokens are indispensable, as their absence collapses models to near-random behavior, while the exact mathematical form and sequence dependency can be substantially relaxed, especially when standard attention is retained in just a subset of layers. Surprisingly, even variants that fail in isolation can achieve robust performance when interleaved with standard attention, highlighting a cooperative effect. These findings deepen our understanding of what truly underpins attention's effectiveness and open new avenues for simplifying language models without sacrificing performance.
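To make the enumerated principles concrete, the following is a minimal NumPy sketch contrasting standard dot-product attention with one illustrative relaxation in which a learned, input-independent mixing matrix replaces the query-key interaction. This is not the paper's actual set of controlled variants; all function and parameter names are placeholders chosen for illustration, and causal masking is omitted for brevity.

```python
# Minimal sketch (illustrative only, not the paper's variants):
# standard dot-product attention vs. a "sequence-independent" relaxation
# where the mixing weights are a fixed, learned matrix A rather than a
# function of the current hidden states.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(X, W_q, W_k, W_v):
    """Standard attention: weights depend on the current hidden states X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # dot-product similarities
    weights = softmax(scores, axis=-1)        # softmax weighting (sequence-dependent)
    return weights @ V                        # mixing information across positions

def fixed_mixing_attention(X, A, W_v):
    """Hypothetical relaxed variant: a learned, input-independent mixing
    matrix A replaces the query-key interaction, so the attention pattern
    no longer adapts to the sequence (token mixing is still preserved)."""
    return softmax(A, axis=-1) @ (X @ W_v)

# Toy usage with random parameters (seq_len=4, d_model=8)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) * 0.1 for _ in range(3))
A = rng.normal(size=(4, 4))
out_standard = dot_product_attention(X, W_q, W_k, W_v)
out_relaxed = fixed_mixing_attention(X, A, W_v)
print(out_standard.shape, out_relaxed.shape)  # (4, 8) (4, 8)
```

Both functions mix information across positions; only the first keeps the sequence-dependent, dot-product-plus-softmax form, which is the kind of distinction the controlled variants in this work are designed to isolate.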