Explainable AI (XAI) aims to make AI systems more transparent, yet many practices emphasise mathematical rigour over practical user needs. We propose an alternative to this model-centric approach by following a design thinking process for the emerging XAI field of training data attribution (TDA), which risks repeating the solutionist patterns seen in other subfields. Because TDA is still in its early stages, however, there is a valuable opportunity to shape its direction through user-centred practices. We engage directly with machine learning developers via a needfinding interview study (N=6) and a scenario-based interactive user study (N=31) to ground explanations in real workflows. Our exploration of the TDA design space reveals novel tasks for data-centric explanations that developers find useful, such as grouping the training samples that drive specific model behaviours or identifying undersampled data. We invite the TDA, XAI, and HCI communities to engage with these tasks to strengthen the practical relevance and human impact of their research.
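To make the "grouping training samples behind a model behaviour" task concrete, the sketch below is a minimal illustration under stated assumptions, not the paper's method or any particular tool's API. It scores training examples with a single-checkpoint TracIn-style gradient dot product (TracIn is one established TDA technique) and then clusters the top-scoring samples into candidate groups. All identifiers (`model`, `loss_fn`, `train_set`, `test_x`, `test_y`) are hypothetical placeholders; `train_set` is assumed to yield `(x, y)` tensor pairs.

```python
# Minimal TDA sketch (illustrative only): score training examples by a
# TracIn-style gradient dot product with one test example, then cluster
# the most influential samples into candidate "behaviour" groups.
import torch
from sklearn.cluster import KMeans

def flat_grad(model, loss_fn, x, y):
    """Flattened gradient of the loss on one example w.r.t. model parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_scores(model, loss_fn, train_set, test_x, test_y):
    """Single-checkpoint TracIn-style score: dot product of each training
    example's loss gradient with the test example's loss gradient."""
    g_test = flat_grad(model, loss_fn, test_x, test_y)
    return torch.stack([flat_grad(model, loss_fn, x, y) @ g_test
                        for x, y in train_set])

def group_influential(model, loss_fn, train_set, test_x, test_y,
                      k=50, n_groups=3):
    """Cluster the k most influential training samples (naively, in input
    space) into candidate groups behind the model's behaviour."""
    scores = tracin_scores(model, loss_fn, train_set, test_x, test_y)
    top = scores.topk(min(k, len(scores))).indices
    feats = torch.stack([train_set[int(i)][0].reshape(-1) for i in top])
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(feats.numpy())
    # Map training-set index -> group id for the most influential samples.
    return {int(i): int(c) for i, c in zip(top, labels)}
```

Clustering in raw input space is a deliberate simplification here; a real data-centric explanation tool might instead group samples in gradient or embedding space, which tends to align groups more closely with the model behaviour being explained.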