During a research project in which we developed a machine learning (ML)-driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported cooperative work, and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design research may support the understanding of stakeholders' situated sense-making in our project, yet found existing guidance regarding ML interpretability insufficient. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens explicating how technical explanations mediate the contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of explanation strategies to analyze a co-design workshop with non-ML experts, methodological implications for participatory design research, and design implications for explanations for non-ML experts, and we suggest further investigation of technological mediation theories in the ML interpretability space.