Image captioning has so far been explored mostly in English, as most available datasets are in this language. However, the application of image captioning should not be restricted by language. Only a few studies have addressed image captioning in a cross-lingual setting. Unlike these works, which manually build a dataset for the target language, we aim to learn a cross-lingual captioning model fully from machine-translated sentences. To overcome the lack of fluency in the translated sentences, we propose a fluency-guided learning framework. The framework comprises a module that automatically estimates the fluency of the sentences and a module that uses the estimated fluency scores to effectively train an image captioning model for the target language. As experiments on two bilingual (English-Chinese) datasets show, our approach improves both the fluency and the relevance of the generated Chinese captions, without using any manually written sentences in the target language.
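The core idea of fluency-guided training can be illustrated with a minimal sketch. Note this is an illustrative assumption, not the paper's actual formulation: here each machine-translated caption's negative log-likelihood is simply weighted by its estimated fluency score, so disfluent translations contribute less to the training loss. The function name `fluency_weighted_loss` and the weighting scheme are hypothetical.

```python
import math

def fluency_weighted_loss(token_log_probs, fluency_scores):
    """Hypothetical fluency-weighted objective: each caption's mean
    token negative log-likelihood is scaled by its estimated fluency
    score in [0, 1], so disfluent machine translations are down-weighted."""
    total = 0.0
    for log_probs, score in zip(token_log_probs, fluency_scores):
        nll = -sum(log_probs) / len(log_probs)  # mean token NLL for this caption
        total += score * nll
    return total / len(token_log_probs)

# Two machine-translated captions: the first is estimated fluent (0.9),
# the second disfluent (0.2) and therefore contributes much less.
batch = [
    [math.log(0.5), math.log(0.25)],  # token log-probabilities, caption 1
    [math.log(0.1), math.log(0.2)],   # token log-probabilities, caption 2
]
scores = [0.9, 0.2]
loss = fluency_weighted_loss(batch, scores)
```

Setting every score to 1.0 recovers plain maximum-likelihood training on the machine-translated data, which makes the fluency module's contribution easy to ablate.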