The opacity of deep learning models constrains their debugging and improvement. Augmenting deep models with saliency-based strategies, such as attention, has been claimed to provide a better understanding of the decision-making process of black-box models. However, recent works have challenged saliency's faithfulness in the field of Natural Language Processing (NLP), questioning whether attention weights adhere to the true decision-making process of the model. We add to this discussion by evaluating, for the first time, the faithfulness of in-model saliency applied to a video processing task, namely temporal colour constancy. We perform the evaluation by adapting to our target task two faithfulness tests from the recent NLP literature, whose methodology we refine as part of our contributions. We show that attention fails to achieve faithfulness, whereas confidence, a particular type of in-model visual saliency, succeeds.