Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing.' There is a broader context that is often not considered within AI and HRI: the problem of trustworthiness is inherently socio-technical and ultimately involves a broad set of complex human factors and multidimensional relationships that can arise between agents, humans, organizations, and even governments and legal institutions, each with their own understanding and definitions of trust. This complexity presents a significant barrier to the development of trustworthy AI and HRI systems: while systems developers may desire to have their systems 'always do the right thing,' they generally lack the practical tools and expertise in law, regulation, policy, and ethics to ensure this outcome. In this paper, we emphasize the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment. We hope to contribute to the discussion of trustworthy engineering in AI and HRI by i) describing the policy landscape that must be considered when addressing trustworthy computing and the need for usable trust models, ii) highlighting an opportunity for trustworthy-by-design intervention within the systems engineering process, and iii) introducing the concept of a "policy-as-a-service" (PaaS) framework that can be readily applied by AI systems engineers to address the fuzzy problem of trust during development and (eventually) at runtime. We envision that the PaaS approach, which offloads the development of policy design parameters and the maintenance of policy standards to policy experts, will enable runtime trust capabilities for intelligent systems in the wild.
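To give a concrete sense of what consuming policy as a service could look like at runtime, the following is a minimal, hypothetical sketch of an agent deferring an action-permissibility check to an externally maintained policy endpoint. The client class, endpoint URL, request fields, and response schema are illustrative assumptions for this sketch, not parts of an existing PaaS implementation.

```python
# Hypothetical sketch: an agent queries a remote policy service (maintained by
# policy experts) before acting, rather than hard-coding policy logic itself.
# All names below (PolicyClient, check_action, the /evaluate route, the
# "permitted" response field) are assumptions made for illustration only.

import json
from urllib import request, error


class PolicyClient:
    """Thin client that defers policy decisions to an external policy service."""

    def __init__(self, endpoint: str, timeout: float = 2.0):
        self.endpoint = endpoint  # e.g. "https://policy.example.org/evaluate" (placeholder)
        self.timeout = timeout

    def check_action(self, agent_id: str, action: str, context: dict) -> bool:
        """Ask the policy service whether `action` is permitted in `context`.

        Fails closed: if the service is unreachable or returns malformed data,
        the action is treated as not permitted.
        """
        payload = json.dumps(
            {"agent_id": agent_id, "action": action, "context": context}
        ).encode("utf-8")
        req = request.Request(
            self.endpoint,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            with request.urlopen(req, timeout=self.timeout) as resp:
                decision = json.load(resp)
            return bool(decision.get("permitted", False))
        except (error.URLError, ValueError):
            return False  # conservative default when policy information is unavailable


if __name__ == "__main__":
    client = PolicyClient("https://policy.example.org/evaluate")
    allowed = client.check_action(
        agent_id="robot-7",
        action="enter_restricted_area",
        context={"humans_present": True, "jurisdiction": "EU"},
    )
    print("Action permitted:", allowed)
```

The design choice worth noting in this sketch is the fail-closed default: because policy parameters are authored and updated by external experts, the agent never substitutes its own judgment when the service cannot be reached.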