Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and the literature on institutional trust, we offer a model of public trust in AI that differs starkly from the models driving Trusted AI efforts. This model provides a theoretical scaffolding for Trusted AI research that underscores the need to develop nothing less than a comprehensive and visibly functioning regulatory ecosystem. We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations -- both to inform potential adopters of AI components and to support the deliberations of risk and ethics review boards -- are necessary but insufficient assurances of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.