AI alignment is about ensuring that AI systems only pursue goals and activities that are beneficial to humans. Most current approaches to AI alignment attempt to learn what humans value from their behavioural data. This paper proposes a different way of looking at the notion of alignment, namely by introducing AI Alignment Dialogues: dialogues in which users and agents try to achieve and maintain alignment through interaction. We argue that alignment dialogues have a number of advantages over data-driven approaches, especially for behaviour support agents, which aim to support users in achieving their desired future behaviours rather than their current behaviours. These advantages include allowing users to directly convey higher-level concepts to the agent, and making the agent more transparent and trustworthy. In this paper we outline the concept and high-level structure of alignment dialogues. Moreover, we conducted a qualitative focus group user study, from which we developed a model of how alignment dialogues affect users and derived design suggestions for AI alignment dialogues. Through this we establish foundations for AI alignment dialogues and shed light on what requires further development and research.