Learning from human preferences is important for language models to be helpful and useful to humans and to align with human and societal values. Prior work has achieved remarkable success by learning from human feedback to understand and follow instructions. Nonetheless, these methods either rely on hand-picked model generations favored by human annotators, which makes them data-inefficient and difficult to apply in general, or they depend on reward functions and reinforcement learning, which are sensitive to imperfect reward functions and extremely challenging to optimize. In this work, we propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity. Our idea is inspired by how humans learn from the rich feedback they receive in the form of language. We convert all types of feedback into sentences, which are then used to fine-tune the model, allowing us to take advantage of the language comprehension capabilities of language models. We condition the model on a sequence of model generations paired with feedback. Trained this way, the model learns to generate outputs based on feedback and to identify and correct negative attributes or errors. Applying our method to large language models, we observe that Chain of Hindsight significantly surpasses previous methods in aligning language models with human preferences. We observe significant improvements on summarization and dialogue tasks, and our approach is markedly preferred in human evaluations.
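To make the idea concrete, the sketch below shows one way a feedback-conditioned training sequence could be assembled from a prompt and a pair of generations. The feedback phrases, field names, and the helper function are illustrative assumptions for this sketch, not the exact templates used in the paper.

```python
# A minimal sketch, assuming a pairwise-preference dataset and hypothetical
# natural-language feedback phrases ("A less helpful answer", "A more helpful
# answer"). The packed string is then used for ordinary next-token fine-tuning.

def build_chain_of_hindsight_example(prompt: str,
                                     worse_output: str,
                                     better_output: str) -> str:
    """Pack a prompt with a dispreferred and a preferred generation,
    each prefixed by natural-language feedback, into one training string."""
    return (
        f"{prompt}\n"
        f"A less helpful answer: {worse_output}\n"
        f"A more helpful answer: {better_output}"
    )

# Example usage: at inference time the model would be conditioned on the
# positive feedback phrase (e.g. "A more helpful answer:") to steer generation.
example = build_chain_of_hindsight_example(
    prompt="Summarize: The city council met to discuss the new transit plan...",
    worse_output="They talked about some stuff.",
    better_output="The council debated funding and a timeline for the transit plan.",
)
print(example)
```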