Automatic captioning of images is a task that combines the challenges of image analysis and text generation. One important aspect of captioning is the notion of attention: how to decide what to describe and in which order. Inspired by the successes in text analysis and translation, previous work has proposed the \textit{transformer} architecture for image captioning. However, the structure of the \textit{semantic units} in images (usually the regions detected by an object detection model) differs from that of sentences (single words). Limited work has been done to adapt the transformer's internal architecture to images. In this work, we introduce the \textbf{\textit{image transformer}}, which consists of a modified encoding transformer and an implicit decoding transformer, motivated by the relative spatial relationships between image regions. Our design widens the original transformer layer's inner architecture to adapt to the structure of images. With only region features as inputs, our model achieves new state-of-the-art performance on both the MSCOCO offline and online testing benchmarks.
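For context, the transformer layer referenced above computes scaled dot-product attention over its inputs. The abstract does not specify the exact form of the widening, so the second equation below is only an illustrative sketch in which a widened layer aggregates several parallel attention branches, e.g., one per type of spatial relationship between image regions:
\begin{align}
\mathrm{Attention}(Q, K, V) &= \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right) V, \\
\mathrm{WideLayer}(X) &= \sum_{i=1}^{m} \mathrm{Attention}\!\left(XW_i^Q,\; XW_i^K,\; XW_i^V\right),
\end{align}
where $X$ is the matrix of detected region features and $W_i^Q, W_i^K, W_i^V$ are the projections of the $i$-th branch. The branch count $m$, the per-branch projections, and the summation are assumptions made for illustration, not the paper's exact formulation.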