This position paper encourages the Human-Computer Interaction (HCI) community to focus on designing deliberative processes to inform and coordinate technology and policy design for large language models (LLMs) -- a `societal-scale technology'. First, I propose a definition of societal-scale technology and locate LLMs within it. Next, I argue that existing processes for ensuring the safety of LLMs are insufficient and do not confer democratic legitimacy on these systems. Instead, we require processes of deliberation among users and other stakeholders on questions such as: What outputs are safe? What deployment contexts are safe? This shift in AI safety research and practice will require the design of corporate and public policies that determine how to enact deliberation, as well as the design of interfaces and technical features that translate the outcomes of deliberation into technical development processes. To conclude, I propose roles for the HCI community in ensuring that deliberative processes inform technology and policy design for LLMs and other societal-scale technologies.