Debugging a machine learning model is hard since the bug usually involves the training data and the learning process. This becomes even harder for an opaque deep learning model if we have no clue about how the model actually works. In this survey, we review papers that exploit explanations to enable humans to give feedback and debug NLP models. We call this problem explanation-based human debugging (EBHD). In particular, we categorize and discuss existing work along three dimensions of EBHD (the bug context, the workflow, and the experimental setting), compile findings on how EBHD components affect the feedback providers, and highlight open problems that could be future research directions.