Structured capability access ("SCA") is an emerging paradigm for the safe deployment of artificial intelligence (AI). Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems. The aim is to prevent dangerous AI capabilities from being widely accessible, whilst preserving access to AI capabilities that can be used safely. The developer must both restrict how the AI system can be used, and prevent the user from circumventing these restrictions through modification or reverse engineering of the AI system. SCA is most effective when implemented through cloud-based AI services, rather than through the dissemination of AI software that runs locally on users' hardware. Cloud-based interfaces give the AI developer greater scope for controlling how the AI system is used, and for protecting against unauthorized modifications to the system's design. This chapter expands the discussion of "publication norms" in the AI community, which to date has focused on the question of how the informational content of AI research projects should be disseminated (e.g., code and models). Although this is an important question, there are limits to what can be achieved through the control of information flows. SCA views AI software not only as information that can be shared but also as a tool with which users can have arm's length interactions. There are early examples of SCA being practiced by AI developers, but there is much room for further development, both in the functionality of cloud-based interfaces and in the wider institutional framework.