For AI technology to fulfill its promise, we must design effective mechanisms into AI systems to support responsible AI behavior and curtail potentially irresponsible use, e.g., in the areas of privacy protection, human autonomy, robustness, and the prevention of bias and discrimination in automated decision making. In this paper, we present a framework that provides computational facilities for the parties in a social ecosystem to produce the desired responsible AI behaviors. To achieve this goal, we analyze AI systems at the architecture level and propose two decentralized cryptographic mechanisms for an AI system architecture: (1) using Autonomous Identity to empower human users, and (2) automating rules and adopting conventions within social institutions. We then propose a decentralized approach and outline the key concepts and building blocks, based on Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), of a general-purpose computational infrastructure that realizes these two mechanisms. We argue, from both computer science and social science perspectives, that a decentralized approach is the most promising path toward Responsible AI.
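To make the two underlying data structures concrete, the sketch below shows the rough shape of a DID document and a Verifiable Credential as defined by the W3C DID Core and VC Data Model specifications, which the abstract names as building blocks. It is only an illustrative outline: the identifiers (`did:example:alice123`, `did:example:institution456`), the claim `consentsToDataProcessing`, and the truncated key and signature values are hypothetical placeholders, not artifacts of the framework proposed in this paper.

```python
# Minimal illustrative shapes of a W3C DID document and a Verifiable
# Credential (VC Data Model). All identifiers and claim values below are
# hypothetical placeholders, not part of the proposed framework.
import json

# A DID document: publicly resolvable material bound to a decentralized
# identifier, whose controller can prove ownership cryptographically.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:alice123",
    "verificationMethod": [{
        "id": "did:example:alice123#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:alice123",
        "publicKeyMultibase": "z6Mk...",  # truncated placeholder key
    }],
    "authentication": ["did:example:alice123#key-1"],
}

# A Verifiable Credential: a signed claim an issuer makes about a subject,
# which the subject can later present without re-contacting the issuer.
verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:institution456",
    "issuanceDate": "2023-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice123",       # the subject's DID
        "consentsToDataProcessing": False,  # an example policy claim
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:institution456#key-1",
        "proofValue": "z3FX...",            # truncated placeholder signature
    },
}

if __name__ == "__main__":
    print(json.dumps(did_document, indent=2))
    print(json.dumps(verifiable_credential, indent=2))
```

In this pattern, the DID gives a user an identifier under their own control (supporting the first mechanism, user empowerment), while VCs let institutions issue machine-verifiable attestations of rules and conventions (supporting the second).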