In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart's law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed "gold-standard" reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-$n$ sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment.
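To make the best-of-$n$ part of the setup concrete, the sketch below is a minimal, hypothetical Python illustration (not the paper's code): it draws $n$ completions from a fixed policy, keeps the one the proxy reward model ranks highest, and reports that completion's gold reward model score, which is the quantity whose degradation under optimization is being measured. The `policy_sample`, `proxy_rm`, and `gold_rm` callables are assumptions introduced here for illustration. The helper `bon_kl` uses the standard analytic expression $\log n - (n-1)/n$ for the KL divergence between the best-of-$n$ distribution and the base policy, which serves as the optimization-strength axis in this kind of analysis.

```python
import math
from typing import Callable, List

def best_of_n_gold_score(
    prompt: str,
    n: int,
    policy_sample: Callable[[str], str],    # hypothetical: draws one completion from the fixed policy
    proxy_rm: Callable[[str, str], float],  # hypothetical: proxy reward model score for (prompt, completion)
    gold_rm: Callable[[str, str], float],   # hypothetical: gold ("ground-truth") reward model score
) -> float:
    """Gold reward of the completion that the *proxy* reward model prefers out of n samples."""
    completions: List[str] = [policy_sample(prompt) for _ in range(n)]
    best = max(completions, key=lambda c: proxy_rm(prompt, c))
    return gold_rm(prompt, best)

def bon_kl(n: int) -> float:
    """KL divergence of the best-of-n distribution from the base policy: log n - (n - 1) / n."""
    return math.log(n) - (n - 1) / n
```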