Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Towards answering these questions, we introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. Scenario labeling is automated with LMs, which are more performant than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods to steer agents' towards less harmful behaviors. Our results show that agents can both act competently and morally, so concrete progress can currently be made in machine ethics--designing agents that are Pareto improvements in both safety and capabilities.
翻译:传统上,人工智能代理的培训方法是最大化奖励,这可能会激励追求权力和欺骗,类似于语言模型中的下一个令牌预测可能会激励毒性。那么代理是否自然而然地学会了马基雅维利主义?我们如何在通用模型(如GPT-4)中衡量这些行为?为了回答这些问题,我们引入了MACHIAVELLI,一个包含超过50万个围绕社会决策制定的丰富多样情境的134款冒险文艺游戏的基准测试。情境标注由比人类注释者更高性能的语言模型自动执行。我们数学化了数十种有害行为,并使用我们的注释来评估代理对于寻求权力和造成不适、违反道德的倾向。我们观察到,在最大化奖励和行使道德之间存在一定的紧张关系。为了改善这种权衡,我们研究了基于语言模型的方法,以引导代理朝向更少有害的行为方向。我们的研究结果表明,代理可以既能表现得有能力又能表现得道德,因此我们可以在机器伦理学方面取得实质性的进展,即设计出既安全又具备能力的Pareto改进型代理。