This position paper encourages the Human-Computer Interaction (HCI) community to focus on designing deliberative processes to inform and coordinate technology and policy design for large language models (LLMs) -- a `societal-scale technology'. First, I propose a definition of societal-scale technology and locate LLMs within it. Next, I argue that existing processes for ensuring the safety of LLMs are insufficient and do not confer democratic legitimacy on these systems. Instead, we require processes of deliberation amongst users and other stakeholders on questions about the safety of outputs and deployment contexts. This shift in AI safety research and practice will require the design of corporate and public policies that determine how deliberation is enacted, as well as the design of interfaces and technical features that translate the outcomes of deliberation into technical development processes. To conclude, I propose roles for the HCI community in ensuring that deliberative processes inform technology and policy design for LLMs and other societal-scale technologies.