Vision-and-language tasks have drawn increasing attention as a means to evaluate human-like reasoning in machine learning models. A popular task in the field is visual question answering (VQA), which aims to answer questions about images. However, VQA models have been shown to exploit language bias by learning statistical correlations between questions and answers without looking at the image content: e.g., questions about the color of a banana are answered with "yellow", even if the banana in the image is green. If societal bias (e.g., sexism, racism, ableism, etc.) is present in the training data, this problem may cause VQA models to learn harmful stereotypes. For this reason, we investigate gender and racial bias in five VQA datasets. In our analysis, we find that the distribution of answers differs markedly between questions about women and questions about men, and that detrimental gender-stereotypical samples exist. Likewise, we identify that certain race-related attributes are underrepresented, while potentially discriminatory samples appear in the analyzed datasets. Our findings suggest that there are dangers associated with using VQA datasets without considering and addressing their potentially harmful stereotypes. We conclude the paper by proposing solutions to alleviate the problem before, during, and after the dataset collection process.