Reinforcement learning has seen wide success in finetuning large language models to better align with instructions via human feedback. The resulting algorithm, Reinforcement Learning with Human Feedback (RLHF), demonstrates impressive performance on the GPT series of models. However, the underlying Reinforcement Learning (RL) algorithm is complex and requires an additional training pipeline for reward and value networks. In this paper, we consider an alternative approach: converting feedback into instructions by relabeling the original ones and training the model for better alignment in a supervised manner. Such an algorithm requires no additional parameters beyond the original language model and maximally reuses the pretraining pipeline. To achieve this, we formulate the instruction alignment problem for language models as a goal-reaching problem in decision making. We propose Hindsight Instruction Relabeling (HIR), a novel algorithm for aligning language models with instructions. The resulting two-stage algorithm sheds light on a family of reward-free approaches that utilize instructions relabeled in hindsight based on feedback. We evaluate the performance of HIR extensively on 12 challenging BigBench reasoning tasks and show that HIR outperforms the baseline algorithms and is comparable to, or even surpasses, supervised finetuning.
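The sketch below illustrates the two-stage loop described in the abstract: sample outputs from the current model, relabel the instruction so that it matches what the model actually produced (judged from feedback), and then finetune on the relabeled data in a supervised manner. It is a minimal sketch, not the paper's implementation; the helpers `generate`, `relabel_from_feedback`, and `supervised_finetune` are hypothetical placeholders for an underlying language-model API.

```python
# Minimal sketch of a hindsight-relabeling loop, assuming hypothetical helpers
# `generate`, `relabel_from_feedback`, and `supervised_finetune` that wrap an
# underlying language model; none of these names come from the paper.

def hindsight_instruction_relabeling(model, prompts, instruction, num_rounds=3):
    """Alternate between online sampling and offline relabeling + supervised training."""
    for _ in range(num_rounds):
        # Stage 1 (online sampling): query the current model with the original
        # instruction and collect its outputs.
        samples = [(p, generate(model, instruction, p)) for p in prompts]

        # Stage 2 (offline relabeling): replace the original instruction with one
        # that the sampled output actually satisfies, judged from feedback, then
        # treat each relabeled (instruction, prompt, output) triple as supervised data.
        relabeled = []
        for prompt, output in samples:
            new_instruction = relabel_from_feedback(instruction, prompt, output)
            relabeled.append((new_instruction, prompt, output))

        # Supervised finetuning on the relabeled data, reusing the pretraining pipeline.
        model = supervised_finetune(model, relabeled)
    return model
```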