Recently, an increasing number of works have introduced models capable of generating natural language explanations (NLEs) for their predictions on vision-language (VL) tasks. Such models are appealing because they can provide human-friendly and comprehensive explanations. However, there is still a lack of unified evaluation approaches for the explanations generated by these models. Moreover, there are currently only a few datasets of NLEs for VL tasks. In this work, we introduce e-ViL, a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks. e-ViL spans four models and three datasets. Both automatic metrics and human evaluation are used to assess model-generated explanations. We also introduce e-SNLI-VE, the largest existing VL dataset with NLEs (over 430k instances). Finally, we propose a new model that combines UNITER, which learns joint embeddings of images and text, and GPT-2, a pre-trained language model that is well-suited for text generation. It surpasses the previous state-of-the-art by a large margin across all datasets.