Oversight is rightly recognised as vital within high-stakes public sector AI applications, where decisions can have profound individual and collective impacts. Much current thinking on oversight mechanisms for AI in the public sector revolves around human decision makers being 'in-the-loop', and thus able to intervene to prevent errors and potential harm. However, in a number of high-stakes public sector contexts, operational oversight of decisions is conducted by expert teams rather than individuals. How deployed AI systems can be integrated into these existing operational team oversight processes has yet to attract much attention. We address this gap by exploring the impacts of AI upon pre-existing oversight of clinical decision-making through institutional analysis. We find that existing oversight is nested within professional training requirements and relies heavily upon explanation and questioning to elicit vital information. Professional bodies and liability mechanisms also act as additional levers of oversight. These dimensions of oversight are impacted, and potentially reconfigured, by AI systems. We therefore suggest a broader lens of 'team-in-the-loop' to conceptualise the system-level analysis required for the adoption of AI within high-stakes public sector deployments.