Recent alignment techniques, such as reinforcement learning from human feedback, have been widely adopted to align large language models with human preferences by learning and leveraging reward models. In practice, however, these reward models often exploit spurious correlations involving, e.g., response length, discrimination, sycophancy, and conceptual bias, a problem that has received increasing attention. In this work, we propose a principled framework that mitigates these biases in reward models while preserving the underlying factors that reflect intended preferences. We first formulate the data-generating process, assuming that the observed data (e.g., text) are generated from both spurious and non-spurious latent variables. We then show that, interestingly, the non-spurious latent variables can be theoretically identified from data, regardless of whether a surrogate for the spurious latent variables is available. This result further motivates a practical method that uses variational inference to recover these variables and leverages them to train reward models. Experiments on synthetic and real-world datasets demonstrate that our method effectively mitigates spurious-correlation issues and yields more robust reward models.
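As a rough illustration of the kind of generative assumption described above, one could picture the observed text as produced jointly by a non-spurious latent factor and a spurious one, with the intended preference depending only on the former. The notation below is a hedged sketch of our own, not the paper's formulation:

\begin{align*}
  (z_c, z_s) &\sim p(z_c, z_s) && \text{non-spurious and spurious latent variables (illustrative symbols)}\\
  x &= g(z_c, z_s) && \text{observed data, e.g., a response text}\\
  r &\sim p(r \mid z_c) && \text{intended preference signal depends only on } z_c
\end{align*}

Under an assumption of this form, a reward model trained on a recovered estimate of $z_c$ (e.g., via variational inference) would be shielded from shortcuts carried by $z_s$, such as response length.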