Recent work (Feng et al., 2018) establishes the presence of short, uninterpretable input fragments that yield high confidence and accuracy in neural models. We refer to these as Minimal Prediction Preserving Inputs (MPPIs). In the context of question answering, we investigate competing hypotheses for the existence of MPPIs, including poor posterior calibration of neural models, lack of pretraining, and "dataset bias" (where a model learns to attend to spurious, non-generalizable cues in the training data). We discover a perplexing invariance of MPPIs to random training seed, model architecture, pretraining, and training domain. MPPIs demonstrate remarkable transferability across domains, achieving significantly higher performance than comparably short queries. Additionally, penalizing over-confidence on MPPIs fails to improve either generalization or adversarial robustness. These results suggest that the interpretability of MPPIs is insufficient to characterize the generalization capacity of these models. We hope this focused investigation encourages more systematic analysis of model behavior outside of the human-interpretable distribution of examples.