Despite the success of autoregressive large language models in text generation, it remains a major challenge to generate text that satisfies complex constraints: sampling from the conditional distribution $\Pr(\text{text} \mid \alpha)$ is intractable for even the simplest lexical constraints $\alpha$. To overcome this challenge, we propose to use tractable probabilistic models to impose lexical constraints in autoregressive text generation, a framework we refer to as GeLaTo. To demonstrate its effectiveness, we use distilled hidden Markov models to control autoregressive generation from GPT2. GeLaTo achieves state-of-the-art performance on CommonGen, a challenging benchmark for constrained text generation, beating a wide range of strong baselines by a large margin. Our work not only opens up new avenues for controlling large language models but also motivates the development of more expressive tractable probabilistic models.
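To make the underlying idea concrete, the following is a minimal sketch of how a tractable constraint model can guide autoregressive decoding. It is an illustration of the general principle, not the paper's algorithm: it assumes the reweighting rule $p(\text{token} \mid \text{prefix}, \alpha) \propto p_{\text{LM}}(\text{token} \mid \text{prefix}) \cdot \Pr(\alpha \mid \text{prefix} + \text{token})$, where the second factor is the probability, under a tractable model, that the constraint can still be satisfied. All names below (`guided_next_token`, `toy_constraint_prob`, the toy vocabulary) are hypothetical placeholders standing in for GPT2 and the distilled HMM.

```python
# Minimal sketch of constraint-guided decoding (assumed reweighting rule):
#   p(tok | prefix, alpha)  ~  p_LM(tok | prefix) * Pr(alpha | prefix + tok)
# Pr(alpha | ...) is the probability that constraint alpha can still be
# satisfied by some completion of the prefix, as computed by a tractable model.
import random


def guided_next_token(lm_probs, constraint_prob, prefix, vocab):
    """Sample the next token from the constraint-reweighted distribution."""
    weights = {tok: lm_probs[tok] * constraint_prob(prefix + [tok]) for tok in vocab}
    total = sum(weights.values())
    if total == 0.0:
        raise ValueError("constraint cannot be satisfied from this prefix")
    r = random.random() * total
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # numerical safeguard


# Toy usage: 3-token sentences over a 3-word vocabulary; the lexical constraint
# is that the keyword "cat" occurs somewhere. toy_constraint_prob stands in for
# the HMM-based computation and is exact for this uniform toy language model.
vocab = ["the", "cat", "sat"]
lm_probs = {tok: 1.0 / len(vocab) for tok in vocab}  # uniform toy language model


def toy_constraint_prob(tokens, length=3, keyword="cat"):
    if keyword in tokens:
        return 1.0
    remaining = length - len(tokens)
    # chance a uniform completion of the remaining positions emits the keyword
    return 1.0 - (1.0 - 1.0 / len(vocab)) ** max(remaining, 0)


prefix = []
for _ in range(3):
    prefix.append(guided_next_token(lm_probs, toy_constraint_prob, prefix, vocab))
print(prefix)  # every sampled sentence contains "cat"
```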