After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, which inherently makes it difficult to estimate the right probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus. The use of such a crude heuristic raises the question: Rather than wasting precious compute resources and model capacity on learning this strategy at early training stages, can we initialise our models with this behaviour? Here, we show that we can effectively endow our model with a separate module that reflects unigram frequency statistics as prior knowledge. Standard neural language generation architectures offer a natural opportunity for implementing this idea: by initialising the bias term in a model's final linear layer with the log-unigram distribution. Experiments in neural machine translation demonstrate that this simple technique: (i) improves learning efficiency; (ii) achieves better overall performance; and (iii) appears to disentangle strong frequency effects, encouraging the model to specialise in non-frequency-related aspects of language.
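As a concrete illustration of the initialisation described above (a minimal sketch, not the paper's code; it assumes a PyTorch-style decoder whose final projection is an nn.Linear, and the helper names are hypothetical), one can estimate a smoothed unigram distribution from the target-side training tokens and copy its logarithm into the bias of that projection:

```python
import torch
import torch.nn as nn


def unigram_log_probs(token_ids, vocab_size, smoothing=1e-8):
    """Smoothed log-unigram distribution estimated from target-side token ids."""
    counts = torch.bincount(torch.as_tensor(token_ids), minlength=vocab_size).float()
    smoothed = counts + smoothing  # avoid log(0) for unseen vocabulary items
    return torch.log(smoothed / smoothed.sum())


def init_output_bias(output_layer: nn.Linear, token_ids, vocab_size):
    """Initialise the final linear layer's bias with the log-unigram distribution."""
    with torch.no_grad():
        output_layer.bias.copy_(unigram_log_probs(token_ids, vocab_size))


# Usage with hypothetical sizes: a 512-dim decoder projecting to a 32k vocabulary.
vocab_size, d_model = 32000, 512
proj = nn.Linear(d_model, vocab_size, bias=True)
corpus_token_ids = torch.randint(0, vocab_size, (100_000,))  # stand-in for real training data
init_output_bias(proj, corpus_token_ids, vocab_size)
```

With this bias in place, a decoder whose pre-softmax activations start near zero already produces approximately the corpus unigram distribution, which is the early-training behaviour the abstract identifies, so the model need not spend its first updates learning it.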