Many past works aim to improve visual reasoning in models by supervising feature importance (estimated by model explanation techniques) with human annotations such as highlights of important image regions. However, recent work has shown that performance gains from feature importance (FI) supervision for Visual Question Answering (VQA) tasks persist even with random supervision, suggesting that these methods do not meaningfully align model FI with human FI. In this paper, we show that model FI supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason (RRR) metrics by optimizing for four key model objectives: (1) accurate predictions given limited but sufficient information (Sufficiency); (2) max-entropy predictions given no important information (Uncertainty); (3) invariance of predictions to changes in unimportant features (Invariance); and (4) alignment between model FI explanations and human FI explanations (Plausibility). Our best-performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets in terms of both in-distribution and out-of-distribution accuracy. While past work suggests that the mechanism for improved accuracy is improved explanation plausibility, we show that this relationship depends crucially on explanation faithfulness (whether explanations truly represent the model's internal reasoning). Predictions are more accurate when explanations are plausible and faithful, but not when they are plausible yet unfaithful. Lastly, we show that, surprisingly, RRR metrics are not predictive of out-of-distribution model accuracy when controlling for a model's in-distribution accuracy, which calls into question the value of these metrics for evaluating model reasoning. All supporting code is available at https://github.com/zfying/visfis.
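To make the four objectives concrete, the following is a minimal PyTorch sketch of how they might be combined into a single training loss. It assumes a model that maps image region features and a question to answer logits, a binary human FI mask over regions, and a simple gradient-norm estimator for model FI; all names here (`rrr_objectives`, `human_fi`, `lam`, the `model(feats, question)` interface) are illustrative assumptions, not the authors' actual implementation, for which see the linked repository.

```python
import torch
import torch.nn.functional as F

def rrr_objectives(model, feats, question, answer, human_fi,
                   lam=(1.0, 1.0, 1.0, 1.0)):
    """Sketch of the four FI-supervision objectives from the abstract.

    Assumed shapes: feats (B, R, D) image region features, answer (B,)
    answer-class indices, human_fi (B, R) human importance scores in [0, 1].
    model(feats, question) is assumed to return (B, C) answer logits.
    """
    keep = (human_fi > 0.5).float().unsqueeze(-1)  # (B, R, 1) important mask

    # (1) Sufficiency: predict the correct answer from important regions alone.
    l_suf = F.cross_entropy(model(feats * keep, question), answer)

    # (2) Uncertainty: near-uniform prediction once important regions are removed
    #     (KL to the uniform distribution, i.e. pushing toward max entropy).
    log_p = F.log_softmax(model(feats * (1 - keep), question), dim=-1)
    uniform = torch.full_like(log_p, 1.0 / log_p.size(-1))
    l_unc = F.kl_div(log_p, uniform, reduction="batchmean")

    # (3) Invariance: prediction unchanged when unimportant regions are perturbed.
    noise = torch.randn_like(feats) * (1 - keep)
    p_full = F.softmax(model(feats, question), dim=-1).detach()
    log_p_pert = F.log_softmax(model(feats + noise, question), dim=-1)
    l_inv = F.kl_div(log_p_pert, p_full, reduction="batchmean")

    # (4) Plausibility: gradient-based model FI should match human FI.
    feats_g = feats.clone().requires_grad_(True)
    score = model(feats_g, question).gather(-1, answer.unsqueeze(-1)).sum()
    grads = torch.autograd.grad(score, feats_g, create_graph=True)[0]
    model_fi = grads.norm(dim=-1)                                   # (B, R)
    model_fi = model_fi / (model_fi.amax(dim=-1, keepdim=True) + 1e-8)
    l_pla = F.mse_loss(model_fi, human_fi)

    return (lam[0] * l_suf + lam[1] * l_unc
            + lam[2] * l_inv + lam[3] * l_pla)
```

The gradient-norm saliency in objective (4) is just one possible FI estimator; any differentiable explanation method could be substituted, and the `lam` weights would be tuned per dataset.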