Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which is not applicable to many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL), on the other hand, offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are often notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of a sequence. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw on the latest RL advances, such as path consistency learning, to combine the best of on-/off-policy updates and to learn effectively from sparse reward. We apply the approach to a wide range of text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show our approach consistently outperforms both task-specialized algorithms and previous RL methods.
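For concreteness, a minimal sketch of the generic soft Q-learning / path consistency learning setup (not necessarily the paper's exact objective) treats the partial sequence as the state $s_t$, the next token as the action $a_t$, and parameterizes the generation policy through a Q-network $Q_\theta$. With temperature fixed to 1 and the soft value $V_\theta(s) = \log \sum_{a} \exp Q_\theta(s, a)$, the policy is $\pi_\theta(a \mid s) = \exp\big(Q_\theta(s, a) - V_\theta(s)\big)$, and a single-step path consistency objective penalizes violations of the soft Bellman relation on trajectories drawn from any behavior policy, which is what allows mixing on- and off-policy data:

\[
\mathcal{L}(\theta) \;=\; \tfrac{1}{2}\,\mathbb{E}_{(s_t,\, a_t,\, r_t,\, s_{t+1})}\Big[\big(r_t + \gamma\, V_\theta(s_{t+1}) - V_\theta(s_t) - \log \pi_\theta(a_t \mid s_t)\big)^2\Big],
\]

where $\gamma$ is a discount factor and the symbols above are notational assumptions rather than quotes from the abstract. Since $-\log \pi_\theta(a_t \mid s_t) = V_\theta(s_t) - Q_\theta(s_t, a_t)$, the residual reduces to the soft Bellman error $r_t + \gamma\, V_\theta(s_{t+1}) - Q_\theta(s_t, a_t)$, through which a sparse terminal reward propagates backward along the sequence; multi-step variants sum the reward and log-probability terms over a span of steps. The specific losses, temperatures, and target-network choices used in the paper may differ from this sketch.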