Explainable AI (XAI) is a promising means of supporting human-AI collaboration in high-stakes visual detection tasks, such as damage detection from satellite imagery, since fully automated approaches are unlikely to be perfectly safe and reliable. However, most existing XAI techniques are not informed by an understanding of humans' task-specific needs for explanations. We therefore took a first step toward understanding what forms of XAI humans require in damage detection tasks. We conducted an online crowdsourced study to understand how people explain their own assessments when evaluating the severity of building damage from satellite imagery. Through this study with 60 crowdworkers, we surfaced six major strategies that humans use to explain their visual damage assessments. We present implications of our findings for the design of XAI methods in such visual detection contexts and discuss opportunities for future research.