The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this paper, we examine the problem of trustworthy AI and explore what delivering it means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human-Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings. To illustrate the importance of organisational accountability, we present findings from ethnographic studies of breast cancer screening and cancer treatment planning in multidisciplinary team meetings, showing how participants made themselves accountable both to each other and to the organisations of which they are members. We use these findings to enrich existing understandings of the requirements for trustworthy AI and to outline candidate solutions to the problems of making AI accountable both to individual users and organisationally. We conclude by outlining the implications of our findings for future work on the development of trustworthy AI, including ways in which our proposed solutions may be re-used in different application settings.