Learning from human preferences is important for language models to be helpful and useful for humans, and to align with human and social values. Prior work has achieved remarkable success by learning from human feedback to understand and follow instructions. Nonetheless, these methods either rely on hand-picked model generations favored by human annotators, which makes them data-inefficient and difficult to apply broadly, or they depend on reward functions and reinforcement learning, which are prone to imperfect reward functions and are extremely challenging to optimize. In this work, we propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity. Our idea is inspired by how humans learn from extensive feedback presented in the form of language. We convert all types of feedback into sentences, which are then used to fine-tune the model, allowing us to take advantage of the language comprehension capabilities of language models. We condition the model on a sequence of model generations paired with feedback. By doing so, models are trained to generate outputs conditioned on feedback, and they learn to identify and correct negative attributes or errors. Applying our method to large language models, we observed that Chain of Hindsight significantly surpasses previous methods in aligning language models with human preferences. We observed significant improvements on summarization and dialogue tasks, and our approach is markedly preferred in human evaluations.
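To make the data-construction idea concrete, the sketch below shows one way a preference pair could be turned into a single training sequence in which natural-language feedback is interleaved with model generations, with the language-modeling loss restricted to the generation spans. The feedback phrasing, the field names, and the character-span masking scheme are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of Chain-of-Hindsight-style example construction.
# The feedback wording ("A less/more preferred answer") and the span-based
# loss mask are assumptions made for this sketch.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PreferencePair:
    prompt: str          # task instruction or context
    worse_output: str    # generation rated lower by annotators
    better_output: str   # generation rated higher by annotators


def build_chain_of_hindsight_example(pair: PreferencePair) -> Tuple[str, List[Tuple[int, int]]]:
    """Turn a preference pair into one training sequence.

    Feedback is expressed as natural-language sentences interleaved with the
    model generations; the returned character spans mark where the
    language-modeling loss would apply (the generations, not the feedback).
    """
    parts = [
        (pair.prompt + "\n", False),
        ("A less preferred answer:\n", False),   # hypothetical feedback sentence
        (pair.worse_output + "\n", True),
        ("A more preferred answer:\n", False),   # hypothetical feedback sentence
        (pair.better_output + "\n", True),
    ]
    text, loss_spans, cursor = "", [], 0
    for piece, supervise in parts:
        if supervise:
            loss_spans.append((cursor, cursor + len(piece)))
        text += piece
        cursor += len(piece)
    return text, loss_spans


if __name__ == "__main__":
    pair = PreferencePair(
        prompt="Summarize: The city council approved the new transit budget after a long debate.",
        worse_output="The council met.",
        better_output="After lengthy debate, the city council approved the new transit budget.",
    )
    text, spans = build_chain_of_hindsight_example(pair)
    print(text)
    print("Loss applied only to these character spans:", spans)
```

At inference time, the same feedback sentences can be prepended as a prompt (e.g. asking for the more preferred style of answer), so the model generates conditioned on the desired feedback.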