The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representation. It is known that even small, manually annotated datasets, such as MSCOCO, are affected by societal bias. This problem, far from being solved, may be getting worse with data crawled from the Internet without much control. In addition, the lack of tools to analyze societal bias in large collections of images makes addressing the problem extremely challenging. Our first contribution is to annotate part of the Google Conceptual Captions dataset, widely used for training vision-and-language models, with four demographic and two contextual attributes. Our second contribution is to conduct a comprehensive analysis of the annotations, focusing on how different demographic groups are represented. Our last contribution lies in evaluating three prevalent vision-and-language tasks: image captioning, text-image CLIP embeddings, and text-to-image generation, showing that societal bias is a persistent problem in all of them.