Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.