Learning from human preferences is important for language models to be helpful and useful to humans and to align with human and societal values. Existing work focuses on supervised finetuning of pretrained models, using curated model generations that human labelers prefer. Such work has achieved remarkable success in understanding and following instructions (e.g., InstructGPT and ChatGPT). However, a key limitation of supervised finetuning to date is that it cannot learn from negative ratings: models are trained only on positively rated data, which makes the approach data inefficient. Because collecting human feedback is both time-consuming and expensive, it is vital for the model to learn from all feedback, akin to the remarkable human ability to learn from diverse feedback. In this work, we propose a novel technique called Hindsight Finetuning that enables language models to learn from diverse human feedback. Our idea is motivated by how humans learn from hindsight experience. We condition the model on a sequence of model generations paired with hindsight feedback and finetune it to predict the most preferred output. By doing so, the model learns to identify and correct negative attributes or errors. Applying the method to GPT-J, we observe that it significantly improves results on summarization and dialogue tasks using the same amount of human feedback.
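To make the conditioning scheme concrete, below is a minimal sketch (not the authors' implementation) of how one training example might be constructed and used: a prompt, a negatively rated generation, and a positively rated generation are concatenated with hindsight feedback phrases, and the loss is computed only on the preferred output. The feedback templates ("An unhelpful answer:", "A helpful answer:"), the Hugging Face GPT-J checkpoint, and the helper names are illustrative assumptions.

```python
# Minimal sketch of hindsight-conditioned finetuning (illustrative, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")


def hindsight_example(prompt, worse, better):
    """Build one training sequence: prompt, a negatively rated generation,
    hindsight feedback, then the positively rated generation.
    Loss is masked so only the preferred output tokens are predicted."""
    prefix = (
        f"{prompt}\n"
        f"An unhelpful answer: {worse}\n"   # assumed feedback template
        f"A helpful answer: "               # assumed feedback template
    )
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    target_ids = tokenizer(better, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # ignore prefix tokens in the loss
    return input_ids, labels


def finetune_step(optimizer, prompt, worse, better):
    """One gradient step on a single hindsight-conditioned example."""
    input_ids, labels = hindsight_example(prompt, worse, better)
    out = model(input_ids=input_ids, labels=labels)  # HF computes causal-LM loss, -100 ignored
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```

Masking the prefix means the model still conditions on the negative example and the feedback, but gradients only push it toward reproducing the preferred output, which is one plausible way to realize "learning from all feedback" described above.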