Image captioning has made substantial progress with large-scale supporting image collections sourced from the web. However, recent studies have pointed out that captioning datasets, such as COCO, contain the gender bias found in web corpora. As a result, captioning models may rely heavily on learned priors and image context for gender identification, leading to incorrect or even offensive predictions. To encourage models to learn correct gender features, we reorganize the COCO dataset and present two new splits, COCO-GB V1 and COCO-GB V2, in which the training and test sets have different gender-context joint distributions. Models that rely on contextual cues suffer large gender prediction errors on the anti-stereotypical test data. Benchmarking experiments reveal that most captioning models learn gender bias, leading to high gender prediction errors, especially for women. To alleviate this unwanted bias, we propose a new Guided Attention Image Captioning model (GAIC), which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence. Experimental results validate that GAIC can significantly reduce gender prediction errors while maintaining competitive caption quality. Our code and the designed benchmark datasets are available at https://github.com/datamllab/Mitigating_Gender_Bias_In_Captioning_System.
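The key idea behind the new splits is that context objects which co-occur mostly with one gender in the training set appear with a balanced gender distribution in the test set, so a model cannot predict gender from context alone. The Python sketch below illustrates one way such an anti-stereotypical test split could be constructed; the sample format, function name, and balancing rule are illustrative assumptions, not the actual COCO-GB construction procedure.

```python
# Hypothetical sketch of building a gender-balanced (anti-stereotypical) test split.
# Assumes each sample carries a gender label ("man"/"woman") and a context object;
# none of these names come from the COCO-GB release.
import random
from collections import defaultdict

def build_anti_stereotypical_split(samples, test_fraction=0.1, seed=0):
    """samples: list of dicts like {"image_id": 1, "gender": "woman", "context": "snowboard"}.
    Returns (train, test) where the test set contains an equal number of men and women
    for each context object, so context is not predictive of gender at test time."""
    random.seed(seed)
    by_key = defaultdict(list)
    for s in samples:
        by_key[(s["context"], s["gender"])].append(s)

    train, test = [], []
    contexts = {ctx for (ctx, _) in by_key}
    for ctx in contexts:
        men = by_key.get((ctx, "man"), [])
        women = by_key.get((ctx, "woman"), [])
        # Take an equal number of each gender for the test set,
        # capped by the rarer gender within this context.
        n_test = int(min(len(men), len(women)) * test_fraction)
        random.shuffle(men)
        random.shuffle(women)
        test += men[:n_test] + women[:n_test]
        train += men[n_test:] + women[n_test:]
    return train, test
```

Under such a split, a model that infers "man" whenever it sees a snowboard would be penalized on the test set, while a model that attends to the person's visual appearance would not.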