Autoregressive large language models (LLMs) trained with Next-Token Prediction (NTP) often suffer from Topic Drift, where the generated text wanders away from the initial prompt because the model relies on local associations rather than global planning. While scaling model size mitigates this, the fundamental myopia of the NTP objective remains. In this work, we introduce the Idea-Gated Transformer, a novel architecture that separates semantic planning from syntactic generation. An auxiliary Idea Head is trained to predict the bag-of-words distribution of a future context window, yielding a latent ``Concept Vector'' that actively gates the main vocabulary during generation. We propose a differentiable gating mechanism that suppresses semantically irrelevant tokens, effectively pruning the search space in real time. Experiments on WikiText-103 demonstrate that the Idea-Gated model achieves validation perplexity comparable to a standard GPT-2 baseline while exhibiting significantly superior Domain Retention. Qualitative and quantitative analyses reveal that the gating mechanism successfully locks generation into specific semantic clusters (e.g., Finance, Science) and resists associative drift, offering a parameter-efficient path toward more controllable language modeling.
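To make the gating step concrete, one minimal formalization is sketched below; the symbols $W_I$, $k$, $\lambda$, and $\epsilon$ are illustrative assumptions rather than notation taken from the paper. The Idea Head maps the hidden state $h_t$ to a concept distribution over the vocabulary,
\[
\mathbf{c}_t = \operatorname{softmax}\!\left(W_I h_t\right),
\]
trained against the empirical bag-of-words distribution of the next $k$ tokens, and the language-model logits $\mathbf{z}_t$ are then gated additively in log space,
\[
\tilde{z}_{t,v} = z_{t,v} + \lambda \log\!\left(c_{t,v} + \epsilon\right),
\]
so that tokens assigned negligible concept mass are suppressed while the whole operation remains differentiable.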