Autoregressive Large Language Models (LLMs) trained via Next-Token Prediction (NTP) often suffer from ``Topic Drift'', where generation wanders away from the initial prompt due to a reliance on local associations rather than global planning \citep{holtzman2019curious}. While scaling model size mitigates this \citep{brown2020language}, the fundamental myopia of the NTP objective remains. In this work, we introduce the Idea-Gated Transformer, a novel architecture that separates semantic planning from syntactic generation. An auxiliary ``Idea Head'' is trained to predict the bag-of-words distribution of a future context window, producing a latent ``Concept Vector'' that actively gates the main vocabulary during generation. We propose a differentiable gating mechanism that suppresses semantically irrelevant tokens, effectively pruning the search space in real time. Experiments on WikiText-103 demonstrate that while the Idea-Gated model achieves validation perplexity comparable to a standard GPT-2 baseline, it exhibits significantly superior Domain Retention. Qualitative and quantitative analysis reveals that the gating mechanism successfully locks generation into specific semantic clusters (e.g., Finance, Science) and resists associative drift, offering a parameter-efficient path toward more controllable language modeling.
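To make the gating mechanism concrete, the following is a minimal PyTorch sketch of a dual-head output layer, assuming a GPT-2 style decoder trunk with hidden size \texttt{d\_model} and vocabulary size \texttt{vocab\_size}. The class name \texttt{IdeaGatedHead}, the \texttt{gate\_strength} hyperparameter, the log-space gate, and the binary cross-entropy auxiliary loss are illustrative assumptions, not the exact implementation described in this paper.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdeaGatedHead(nn.Module):
    """Sketch of a dual-head output layer: a standard LM head plus an
    auxiliary Idea Head whose bag-of-words prediction (the "Concept
    Vector") gates the next-token logits."""

    def __init__(self, d_model: int, vocab_size: int,
                 gate_strength: float = 5.0):
        super().__init__()
        self.lm_head = nn.Linear(d_model, vocab_size)    # next-token logits
        self.idea_head = nn.Linear(d_model, vocab_size)  # future bag-of-words
        self.gate_strength = gate_strength               # assumed hyperparameter

    def forward(self, hidden, future_bow=None):
        # hidden: (batch, seq, d_model) activations from the transformer trunk
        lm_logits = self.lm_head(hidden)

        # Concept Vector: per-token probability of appearing in the
        # upcoming context window.
        concept = torch.sigmoid(self.idea_head(hidden))

        # Soft, differentiable gate: add a log-penalty to tokens the
        # Concept Vector deems irrelevant instead of hard-masking them.
        gated_logits = lm_logits + self.gate_strength * torch.log(concept + 1e-6)

        aux_loss = None
        if future_bow is not None:
            # future_bow: (batch, seq, vocab_size) multi-hot targets built
            # from the tokens in a future window; trains the Idea Head.
            aux_loss = F.binary_cross_entropy(concept, future_bow)
        return gated_logits, aux_loss
\end{verbatim}
Under these assumptions, the gate is applied in log space rather than as a hard mask, which keeps the whole objective differentiable and lets the Idea Head be trained jointly with the standard language modeling loss.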