To align conditional text generation model outputs with desired behaviors, there has been an increasing focus on training the model using reinforcement learning (RL) with reward functions learned from human annotations. Under this framework, we identify three common cases where high rewards are incorrectly assigned to undesirable patterns: noise-induced spurious correlation, naturally occurring spurious correlation, and covariate shift. We show that even though the learned reward function achieves high performance on the distribution of the data used to train it, the undesirable patterns may be amplified during RL training of the text generation model. While there has been discussion about reward gaming in the RL and safety communities, in this short discussion piece we would like to highlight reward gaming in the NLG community using concrete conditional text generation examples, and to discuss potential fixes and areas for future work.
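To make the framework described above concrete, the following is a minimal PyTorch sketch of the two-stage setup: a reward model is first fit to human annotations, and a generator is then fine-tuned with policy-gradient RL (REINFORCE) against that frozen learned reward. All module names, sizes, and the toy data here are hypothetical illustrations, not the paper's implementation; the generator is unconditional for brevity, whereas a conditional text generator would condition on a source input.

```python
# A minimal sketch (not the paper's implementation) of RL training with a
# learned reward: (1) fit a reward model to human annotations, (2) fine-tune
# a generator with REINFORCE against the frozen reward. If the reward model
# latched onto a spurious feature of the annotated data, step (2) can amplify
# that feature in the generator's outputs. Toy sizes/data are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MAX_LEN = 20, 8  # toy vocabulary size and output length

class RewardModel(nn.Module):
    """Scores a token sequence; trained on human annotations."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.head = nn.Linear(32, 1)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        h = self.emb(tokens).mean(dim=1)  # mean-pool token embeddings
        return self.head(h).squeeze(-1)   # one scalar score per sequence

class Generator(nn.Module):
    """Toy sampler standing in for a conditional text generation model."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(MAX_LEN, VOCAB))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        tokens = dist.sample()                  # (MAX_LEN,) sampled tokens
        log_prob = dist.log_prob(tokens).sum()  # sequence log-probability
        return tokens, log_prob

# --- Step 1: fit the reward model to (sequence, human label) annotations. ---
reward_model = RewardModel()
opt_r = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
annotated = [(torch.randint(0, VOCAB, (MAX_LEN,)), float(i % 2)) for i in range(64)]
seqs = torch.stack([s for s, _ in annotated])
labels = torch.tensor([y for _, y in annotated])
for _ in range(50):
    loss = F.binary_cross_entropy_with_logits(reward_model(seqs), labels)
    opt_r.zero_grad(); loss.backward(); opt_r.step()

# --- Step 2: RL fine-tune the generator against the frozen learned reward. ---
generator = Generator()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-2)
for _ in range(200):
    tokens, log_prob = generator.sample()
    with torch.no_grad():
        reward = torch.sigmoid(reward_model(tokens.unsqueeze(0)))[0]
    loss = -reward * log_prob               # REINFORCE objective
    opt_g.zero_grad(); loss.backward(); opt_g.step()
```

Because the generator is optimized only against the learned scorer, any pattern the reward model rewards spuriously (from annotation noise, natural correlations in the annotated data, or covariate shift at generation time) is exactly what this loop will reinforce.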