Visual question answering (VQA) is a critical multimodal task in which an agent must answer questions based on visual cues. Unfortunately, language bias is a common problem in VQA: a model generates answers by relying solely on correlations with the question while ignoring the visual content, producing biased results. We tackle the language bias problem by proposing a self-supervised counterfactual metric learning (SC-ML) method that better focuses on image features. SC-ML adaptively selects question-relevant visual features to answer the question, reducing the negative influence of question-irrelevant visual features on answer inference. In addition, the question-irrelevant visual features can be seamlessly incorporated into a counterfactual training scheme to further boost robustness. Extensive experiments demonstrate the effectiveness of our method, with improved results on the VQA-CP dataset. Our code will be made publicly available.
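To make the two ideas in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the overall recipe, not the paper's actual implementation: region features are scored against the question embedding and split into question-relevant and question-irrelevant sets, the relevant set drives the factual answer prediction, and the irrelevant set feeds a counterfactual branch that is pushed toward an uninformative (uniform) answer distribution. All function names, the top-k split, and the uniform-target counterfactual loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def split_by_relevance(img_feats, q_emb, top_k):
    """Score each region feature against the question embedding and split
    regions into question-relevant (top-k) and question-irrelevant (rest).
    The dot-product scoring is an illustrative choice, not the paper's."""
    scores = img_feats @ q_emb                      # (num_regions,)
    order = scores.argsort(descending=True)
    return img_feats[order[:top_k]], img_feats[order[top_k:]]

def counterfactual_loss(relevant, irrelevant, q_emb, classifier, answer):
    """Factual branch answers from relevant regions; the counterfactual
    branch, fed only irrelevant regions, is regularized toward a uniform
    answer distribution so it cannot carry the prediction."""
    fused_fact = relevant.mean(dim=0) * q_emb       # simple multiplicative fusion
    fused_cf = irrelevant.mean(dim=0) * q_emb
    logits_fact = classifier(fused_fact)
    logits_cf = classifier(fused_cf)
    loss_fact = F.cross_entropy(logits_fact.unsqueeze(0), answer)
    uniform = torch.full_like(logits_cf, 1.0 / logits_cf.numel())
    loss_cf = F.kl_div(F.log_softmax(logits_cf, dim=-1), uniform,
                       reduction="batchmean")
    return loss_fact + loss_cf

# Toy example: 10 region features, 16-dim embeddings, 5 candidate answers.
dim, num_regions, num_answers = 16, 10, 5
img = torch.randn(num_regions, dim)
q = torch.randn(dim)
clf = torch.nn.Linear(dim, num_answers)
ans = torch.tensor([2])

rel, irr = split_by_relevance(img, q, top_k=4)
loss = counterfactual_loss(rel, irr, q, clf, ans)
```

In this sketch the counterfactual term penalizes the model whenever question-irrelevant regions alone suffice to answer, which is one way to operationalize the robustness claim above.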