Neural captioners are typically trained to mimic human-generated references without optimizing for any specific communication goal, leading to problems such as the generation of vague captions. In this paper, we show that fine-tuning an out-of-the-box neural captioner with a self-supervised discriminative communication objective helps to recover a plain, visually descriptive language that is more informative about image contents. Given a target image, the system must learn to produce a description that enables an out-of-the-box text-conditioned image retriever to identify that image among a set of candidates. We experiment with the popular ClipCap captioner, also replicating the main results with BLIP. In terms of similarity to ground-truth human descriptions, the captions emerging from discriminative finetuning lag slightly behind those generated by the non-finetuned model, when the latter is trained and tested on the same caption dataset. However, when the model is used without further tuning to generate captions for out-of-domain datasets, our discriminatively-finetuned captioner generates descriptions that resemble human references more closely than those produced by the same captioner without finetuning. We further show that, on the Conceptual Captions dataset, discriminatively finetuned captions are more helpful than either vanilla ClipCap captions or ground-truth captions for human annotators performing an image discrimination task.
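To make the shape of the discriminative communication objective concrete, below is a minimal PyTorch sketch, not the paper's implementation: it only shows the listwise retrieval loss in which each generated caption must single out its own image among the other images in the batch, with random tensors standing in for the embeddings that a frozen text/image encoder (e.g. CLIP) would produce. Because real captions are discrete text, the captioner itself would typically be updated through a policy-gradient-style estimator driven by this retrieval signal; that part is omitted here.

```python
import torch
import torch.nn.functional as F


def discriminative_retrieval_loss(caption_emb: torch.Tensor,
                                  image_emb: torch.Tensor) -> torch.Tensor:
    """Listwise retrieval loss: the i-th caption must identify the i-th image
    among all candidate images in the batch.

    caption_emb: (N, D) embeddings of the generated captions
    image_emb:   (N, D) embeddings of the corresponding target images
    """
    # Cosine similarities between every caption and every candidate image.
    caption_emb = F.normalize(caption_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = caption_emb @ image_emb.t()  # (N, N) caption-to-image scores

    # Target: each caption should retrieve the image it was generated for.
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs.
    torch.manual_seed(0)
    captions = torch.randn(8, 512)  # hypothetical caption embeddings
    images = torch.randn(8, 512)    # hypothetical image embeddings
    print(discriminative_retrieval_loss(captions, images))
```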