Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and serve as a mechanism to elicit user control, because once users understand, they can provide feedback. The goal of this paper is to present an overview of research in which explanations are combined with interactive capabilities as a means both to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches based on their intended purpose and on how they structure the interaction, and highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, with the hope of spurring further research on this blossoming topic.