Large-scale transformer-based language models (LMs) demonstrate impressive capabilities in open-ended text generation. However, controlling properties of the generated text such as topic, style, and sentiment is challenging and often requires significant changes to the model architecture or retraining and fine-tuning on new supervised data. This paper presents a novel approach to Topical Language Generation (TLG) that combines a pre-trained LM with topic-modeling information. We cast the problem in a Bayesian formulation, with topic probabilities as the prior, LM probabilities as the likelihood, and the topical language generation probability as the posterior. In learning the model, we derive the topic probability distribution from the natural structure of user-provided documents. Furthermore, we extend the model with new parameters and functions that control how strongly topical features appear in the generated text, making it easy to adjust the topical properties of the output. Our experimental results demonstrate that our model outperforms state-of-the-art results on coherence, diversity, and fluency while being faster at decoding.
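As a minimal sketch of the Bayesian reweighting described above, the next-token posterior can be formed by multiplying the LM's likelihood over the vocabulary by a topic-word prior; the `gamma` exponent below is a hypothetical strength parameter standing in for the paper's control mechanism, and the function names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def topical_next_token_probs(lm_logits, topic_word_probs, gamma=1.0):
    """Sketch of posterior ∝ likelihood × prior for topical generation.

    lm_logits: unnormalized next-token scores from the base LM (vocab-sized).
    topic_word_probs: P(word | topic) over the same vocabulary, e.g. from a topic model.
    gamma: hypothetical knob for topical strength; gamma = 0 recovers the base LM.
    """
    lm_probs = np.exp(lm_logits - lm_logits.max())
    lm_probs /= lm_probs.sum()                 # likelihood: P_LM(token | context)
    prior = topic_word_probs ** gamma          # topic prior raised to a tunable power
    posterior = lm_probs * prior               # unnormalized posterior over the vocabulary
    return posterior / posterior.sum()         # renormalize before sampling

# Usage: sample the next token from the reweighted distribution.
# next_id = np.random.choice(len(vocab), p=topical_next_token_probs(logits, topic_probs, gamma=2.0))
```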