Despite the success of autoregressive large language models in text generation, generating text that satisfies complex constraints remains a major challenge: sampling from the conditional distribution $\Pr(\text{text} \mid \alpha)$ is intractable even for the simplest lexical constraints $\alpha$. To overcome this challenge, we propose to use tractable probabilistic models to impose lexical constraints in autoregressive text generation, a framework we refer to as GeLaTo. To demonstrate its effectiveness, we use distilled hidden Markov models to control autoregressive generation from GPT2. GeLaTo achieves state-of-the-art performance on CommonGen, a challenging benchmark for constrained text generation, beating a wide range of strong baselines by a large margin. Our work not only opens up new avenues for controlling large language models but also motivates the development of more expressive tractable probabilistic models.
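As a concrete illustration of the idea, the sketch below shows one natural way a tractable model could impose a lexical constraint during decoding: reweight the base LM's next-token probabilities by the tractable model's probability that the constraint $\alpha$ can still be satisfied given each candidate continuation. This is a minimal, hypothetical sketch rather than the paper's exact algorithm; the function name, toy vocabulary, and numbers are illustrative assumptions.

```python
import numpy as np

def constrained_next_token(p_lm_next, p_alpha_given_next):
    """One conceptual constrained decoding step.

    p_lm_next[v]         : p_LM(x_{t+1} = v | x_{1:t}) from the autoregressive LM
    p_alpha_given_next[v]: p_TPM(alpha | x_{1:t}, x_{t+1} = v) from the tractable model
    Returns a distribution proportional to their product, i.e. an approximation of
    p(x_{t+1} = v | x_{1:t}, alpha).
    """
    unnorm = p_lm_next * p_alpha_given_next
    return unnorm / unnorm.sum()

# Toy example over a 4-token vocabulary (numbers are made up):
p_lm = np.array([0.50, 0.30, 0.15, 0.05])      # base LM next-token probabilities
p_alpha = np.array([0.0, 0.9, 0.4, 1.0])       # prob. the constraint remains satisfiable
print(constrained_next_token(p_lm, p_alpha))   # tokens that doom the constraint get zero mass
```

The key point is that the second factor requires marginalizing over all possible futures, which is exactly the computation that is intractable for the LM itself but tractable for models such as hidden Markov models.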