Many decision-making systems deployed in the real world are not static: a phenomenon known as model adaptation takes place over time. The need for transparency and interpretability of AI-based decision models is widely accepted and has therefore been studied extensively. Usually, explanation methods assume that the system to be explained is static. Explaining non-static systems is still an open research question, which poses the challenge of how to explain model adaptations. In this contribution, we propose and empirically evaluate a framework for explaining model adaptations by contrastive explanations. We also propose a method for automatically finding regions in data space that are affected by a given model adaptation and should therefore be explained.