We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL) appears to be a natural conceptual framework. However, using RL for LM-based generation faces empirical challenges, including training instability due to the combinatorial action space, as well as a lack of open-source libraries and benchmarks customized for LM alignment. Thus, a question arises in the research community: is RL a practical paradigm for NLP? To help answer this, we first introduce an open-source modular library, RL4LMs (Reinforcement Learning for Language Models), for optimizing language generators with RL. The library consists of on-policy RL algorithms that can be used to train any encoder or encoder-decoder LM in the HuggingFace library (Wolf et al. 2020) with an arbitrary reward function. Next, we present the GRUE (General Reinforced-language Understanding Evaluation) benchmark, a set of 6 language generation tasks which are supervised not by target strings, but by reward functions which capture automated measures of human preference. GRUE is the first leaderboard-style evaluation of RL algorithms for NLP tasks. Finally, we introduce an easy-to-use, performant RL algorithm, NLPO (Natural Language Policy Optimization), that learns to effectively reduce the combinatorial action space in language generation. We show 1) that RL techniques are generally better than supervised methods at aligning LMs to human preferences; and 2) that NLPO exhibits greater stability and performance than previous policy gradient methods (e.g., PPO (Schulman et al. 2017)), based on both automatic and human evaluation.
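To make the reward-driven training setup concrete, the sketch below shows a minimal REINFORCE-style loop that fine-tunes a HuggingFace causal LM against a scalar reward. This is an illustrative assumption, not the RL4LMs API or the NLPO algorithm: the model choice and the `reward_fn` stand-in for an automated preference metric are hypothetical, and a production setup would use an on-policy algorithm such as PPO or NLPO with batching, baselines, and KL regularization.

```python
# Minimal, illustrative sketch of reward-driven LM fine-tuning (not the RL4LMs API).
# `reward_fn` is a hypothetical stand-in for an automated measure of human preference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM from the HuggingFace hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def reward_fn(prompt: str, continuation: str) -> float:
    """Hypothetical reward: a real setup would score fluency, sentiment, etc."""
    return float(len(continuation.split()) > 5)  # toy preference for longer outputs


def reinforce_step(prompt: str, max_new_tokens: int = 20) -> float:
    """One REINFORCE update: sample a continuation, score it, weight its log-prob."""
    enc = tokenizer(prompt, return_tensors="pt")
    prompt_len = enc["input_ids"].shape[1]

    # Sample a continuation from the current policy (no gradients needed here).
    with torch.no_grad():
        out = model.generate(
            **enc,
            do_sample=True,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = tokenizer.decode(out[0, prompt_len:], skip_special_tokens=True)
    reward = reward_fn(prompt, continuation)

    # Recompute log-probabilities of the sampled tokens with gradients enabled.
    logits = model(out).logits[0, :-1]                 # position t predicts token t+1
    log_probs = torch.log_softmax(logits, dim=-1)
    targets = out[0, 1:]
    token_log_probs = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    gen_log_prob = token_log_probs[prompt_len - 1:].sum()  # only generated tokens

    loss = -reward * gen_log_prob                      # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward


if __name__ == "__main__":
    for step in range(3):
        r = reinforce_step("The movie was")
        print(f"step {step}: reward={r:.2f}")
```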