Large pretrained language models generate fluent text but are notoriously difficult to sample from in a controllable way. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints while maintaining fluency and the model's performance on a downstream task. We propose MuCoLa -- a sampling procedure that combines the language model's log-likelihood with arbitrary (differentiable) constraints into a single energy function, and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin dynamics, using the gradients of the energy function. We evaluate MuCoLa on text generation with soft and hard constraints, as well as their combinations, obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.
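The Langevin-dynamics update at the core of this procedure can be sketched on a toy problem. The snippet below is a minimal illustration only, not the paper's MuCoLa implementation: the quadratic energy, step size, dimensions, and chain counts are hypothetical stand-ins for the combined LM log-likelihood plus constraint energy.

```python
import numpy as np

def energy_grad(x, mu=2.0):
    # Gradient of a toy quadratic energy E(x) = 0.5 * ||x - mu||^2.
    # In MuCoLa, this role is played by the gradient of the combined
    # LM negative log-likelihood plus differentiable constraint terms.
    return x - mu

def langevin_sample(n_chains=500, dim=3, steps=1000, eta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize every chain (output sequence) with noise.
    x = rng.normal(size=(n_chains, dim))
    for _ in range(steps):
        noise = rng.normal(size=(n_chains, dim))
        # Langevin update: a gradient step on the energy
        # plus injected Gaussian noise of matched scale.
        x = x - eta * energy_grad(x) + np.sqrt(2 * eta) * noise
    return x

samples = langevin_sample()
# The chains' stationary distribution concentrates around the
# energy minimum (mu = 2.0 in this toy setup).
```

Without the noise term this reduces to plain gradient descent on the energy; the injected noise is what turns the iteration into a sampler rather than an optimizer.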