An ever-increasing number of high-stakes decisions are made or assisted by automated systems built on brittle artificial-intelligence technology. There is a substantial risk that some of these decisions harm people by infringing on their well-being or their fundamental human rights. State-of-the-art AI systems make little effort to appropriately document their decision processes. This obstructs the ability to trace what went into a decision, which in turn is a prerequisite for any attempt to reconstruct a responsibility chain. Specifically, such traceability requires documentation that will stand up in court when determining the cause of an AI-based decision that inadvertently or intentionally violates the law. This paper takes a radical yet practical approach to this problem by enforcing the documentation of each and every component that goes into the training or inference of an automated decision. It presents the first running workflow supporting the generation of tamper-proof, verifiable, and exhaustive traces of AI decisions. In doing so, we expand the DBOM concept into an effective running workflow leveraging confidential-computing technology. We demonstrate the inner workings of the workflow through the development of an app that tells poisonous and edible mushrooms apart, meant as a playful example of high-stakes decision support.