We introduce a variety of models, trained on a supervised image captioning corpus to predict the image features for a given caption, that ground sentence representations. We train a grounded sentence encoder that achieves good performance on COCO caption and image retrieval, and subsequently show that this encoder transfers successfully to various NLP tasks, improving performance over text-only models. Lastly, we analyze the contribution of grounding and show that word embeddings learned by this system outperform non-grounded ones.
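The core objective, predicting image features from a caption encoding, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: the mean-of-word-embeddings encoder, the linear projection, the dimensions, and the plain-gradient-descent loop are all assumptions made for the sketch; in practice the image features would come from a pretrained vision model and the encoder would be learned jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: each caption is a bag of word embeddings;
# the target "image features" stand in for pretrained CNN features.
d_word, d_img, n_pairs = 8, 4, 32
captions = rng.normal(size=(n_pairs, 5, d_word))   # 5 words per caption
image_feats = rng.normal(size=(n_pairs, d_img))    # grounding targets

W = np.zeros((d_word, d_img))  # linear grounding projection


def encode(caps):
    # Simplest possible sentence encoder: mean of word embeddings.
    return caps.mean(axis=1)


def mse(pred, target):
    return ((pred - target) ** 2).mean()


lr = 0.1
sents = encode(captions)
for _ in range(200):
    pred = sents @ W
    # Gradient of the mean squared error w.r.t. W
    grad = 2 * sents.T @ (pred - image_feats) / (n_pairs * d_img)
    W -= lr * grad

# Training should reduce the grounding loss below the zero-projection baseline.
print(mse(sents @ W, image_feats) < mse(sents @ np.zeros_like(W), image_feats))
```

Once trained, `encode` (and the word embeddings feeding it) carry visual signal, which is the representation the abstract proposes transferring to downstream NLP tasks.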