Attention mechanisms form a core component of several successful deep learning architectures and are based on one key idea: "the output depends only on a small (but unknown) segment of the input." In many practical applications, such as image captioning and language translation, this assumption largely holds. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output are often used as a way to peek into the "reasoning" of the network. We make this notion precise for a variant of the classification problem, used with attention model architectures, that we term selective dependence classification (SDC). Under this setting, we demonstrate various error modes in which an attention model can be accurate yet fail to be interpretable, and show that such models do arise as a result of training. We illustrate various situations that can accentuate or mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate several attention model learning algorithms designed to encourage sparsity, and demonstrate that these algorithms improve interpretability.