Image captioning has made substantial progress, supported by large-scale image collections sourced from the web. However, recent studies have pointed out that captioning datasets such as COCO inherit the gender bias present in web corpora. As a result, learned models may rely heavily on dataset priors and image context for gender identification, leading to incorrect or even offensive predictions. To encourage models to learn correct gender features, we reorganize the COCO dataset and present two new splits, the COCO-GB V1 and V2 datasets, in which the training and test sets have different gender-context joint distributions. Models that rely on contextual cues suffer large gender prediction errors on the anti-stereotypical test data. Benchmarking experiments reveal that most captioning models learn gender bias, leading to high gender prediction error rates, especially for women. To alleviate this unwanted bias, we propose a new Guided Attention Image Captioning model (GAIC), which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence. Experimental results validate that GAIC significantly reduces gender prediction errors while maintaining competitive caption quality. Our code and the designed benchmark datasets are available at https://github.com/CaptionGenderBias2020.
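To make the split construction concrete, the sketch below shows one simple way to separate train and test sets so their gender-context joint distributions differ. The function name, the sample schema, and the "minority-gender vote" heuristic are all illustrative assumptions, not the actual COCO-GB construction procedure.

```python
from collections import Counter, defaultdict
import random

def build_anti_stereotypical_split(samples, test_size=500, seed=0):
    """Illustrative sketch (not the COCO-GB recipe): route images whose
    gender disagrees with the majority gender of their context objects
    into the test set, so train and test have different gender-context
    joint distributions.

    `samples`: list of dicts such as
        {"image_id": 1, "gender": "woman", "objects": ["surfboard"]}
    (hypothetical schema for this sketch).
    """
    rng = random.Random(seed)
    cooc = defaultdict(Counter)                  # object -> gender counts
    for s in samples:
        for obj in s["objects"]:
            cooc[obj][s["gender"]] += 1

    def is_anti_stereotypical(s):
        # A sample is anti-stereotypical if its gender is the minority
        # gender for most of its context objects.
        votes = [min(cooc[o], key=cooc[o].get) == s["gender"]
                 for o in s["objects"] if len(cooc[o]) == 2]
        return bool(votes) and sum(votes) > len(votes) / 2

    anti = [s for s in samples if is_anti_stereotypical(s)]
    rng.shuffle(anti)
    test = anti[:test_size]
    test_ids = {s["image_id"] for s in test}
    train = [s for s in samples if s["image_id"] not in test_ids]
    return train, test
```

A model that predicts gender from context priors (e.g., "surfboard" implies "man") will, by construction, mispredict on most of this test set, which is what makes such a split a useful bias probe.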
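The general idea of guiding visual attention toward gender evidence can also be sketched as a loss term. The snippet below is a minimal, assumed formulation, penalizing low attention mass on person regions at decoding steps that emit gendered words; GAIC's actual self-guidance mechanism may differ, and `attention_guidance_loss`, its arguments, and the person-mask input are hypothetical.

```python
import torch

def attention_guidance_loss(att_weights, person_mask, gender_steps):
    """Hedged sketch of an attention self-guidance term: when the decoder
    emits a gendered word, push its spatial attention onto person regions
    so the gender prediction is grounded in visual evidence.

    att_weights:  (T, R) attention over R image regions per decode step
    person_mask:  (R,) binary indicator of person regions
    gender_steps: indices of decode steps producing gendered words
    """
    eps = 1e-8
    att = att_weights[gender_steps]                 # (G, R)
    person_mass = (att * person_mask).sum(dim=-1)   # attention on person
    return -(person_mass + eps).log().mean()        # maximize person mass
```

In practice such a term would be added to the standard captioning cross-entropy loss with a weighting coefficient, trading off caption quality against how strongly attention is pulled toward the person.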