Language bias is a critical issue in Visual Question Answering (VQA), where models often exploit dataset biases to make the final decision without considering the image information. As a result, they suffer performance drops on out-of-distribution data and provide inadequate visual explanations. Based on experimental analysis of existing robust VQA methods, we stress that language bias in VQA comes from two aspects, i.e., distribution bias and shortcut bias. We further propose a new de-biasing framework, Greedy Gradient Ensemble (GGE), which combines multiple biased models for unbiased base model learning. With the greedy strategy, GGE forces the biased models to over-fit the biased data distribution in priority, thus making the base model pay more attention to examples that are hard to solve by the biased models. Experiments demonstrate that our method makes better use of visual information and achieves state-of-the-art performance on the diagnostic dataset VQA-CP without using extra annotations.
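To make the greedy ensembling idea more concrete, below is a minimal PyTorch-style sketch of one training step under stated assumptions: the feature dimensions, the linear `BiasedModel`/`BaseModel` placeholders, and the `gge_step` helper are all hypothetical stand-ins for real question/image encoders, and the residual target is one plausible way to realize "fitting the biased distribution in priority", not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical feature dimensions and answer vocabulary size (illustrative only).
Q_DIM, V_DIM, N_ANS = 1024, 2048, 3129

class BiasedModel(nn.Module):
    """Question-only branch meant to capture language (distribution/shortcut) bias."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Q_DIM, N_ANS)

    def forward(self, q_feat):
        return self.fc(q_feat)

class BaseModel(nn.Module):
    """Placeholder for the full VQA model that uses both question and image features."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Q_DIM + V_DIM, N_ANS)

    def forward(self, q_feat, v_feat):
        return self.fc(torch.cat([q_feat, v_feat], dim=-1))

biased, base = BiasedModel(), BaseModel()
opt = torch.optim.Adamax(list(biased.parameters()) + list(base.parameters()), lr=2e-3)

def gge_step(q_feat, v_feat, labels):
    """One greedy gradient-ensemble-style update (hypothetical sketch).

    The biased branch is fitted to the soft answer labels first; the base model
    is then fitted to the residual left unexplained by the biased branch, so it
    concentrates on examples the language bias alone cannot solve.
    """
    bias_logits = biased(q_feat)
    loss_bias = F.binary_cross_entropy_with_logits(bias_logits, labels)

    # Residual pseudo-label: the part of the target the biased branch fails to cover.
    residual = (labels - torch.sigmoid(bias_logits.detach())).clamp(min=0.0)
    base_logits = base(q_feat, v_feat)
    loss_base = F.binary_cross_entropy_with_logits(base_logits, residual)

    loss = loss_bias + loss_base
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for real question/image encoders.
q = torch.randn(8, Q_DIM)
v = torch.randn(8, V_DIM)
y = torch.randint(0, 2, (8, N_ANS)).float()
print(gge_step(q, v, y))
```

In this sketch the greedy behaviour comes from letting the biased branch fit the labels directly while the base model only sees the clamped residual, which mirrors the intuition that biased models should absorb the easy, bias-aligned examples first.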