Controllable Image Captioning is a recent sub-field of the multi-modal task of Image Captioning in which constraints are placed on which regions of an image should be described in the generated natural-language caption. This puts a stronger focus on producing detailed descriptions and opens the door to greater end-user control over the results. A vital component of a Controllable Image Captioning architecture is the mechanism that decides when to attend to each region by advancing a region pointer. In this paper, we propose a novel method for predicting the timing of region pointer advancement by treating the advancement step as a natural part of the language structure via a NEXT-token, motivated by a strong correlation to sentence structure in the training data. We find that our timing agrees with the ground-truth timing in the Flickr30k Entities test data with a precision of 86.55% and a recall of 97.92%. Our model implementing this technique improves on the state of the art on standard captioning metrics while additionally demonstrating a considerably larger effective vocabulary size.
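To make the NEXT-token mechanism concrete, the sketch below shows one way a decoding loop could advance a region pointer whenever the model emits the special token, so that advancement timing is learned as ordinary language modelling. The names `decoder_step`, `NEXT_TOKEN`, and `regions` are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
# Minimal sketch of decoding with a NEXT-token that advances the region
# pointer. All names here are hypothetical and illustrative only.

NEXT_TOKEN = "<NEXT>"
EOS_TOKEN = "<EOS>"

def generate_caption(decoder_step, regions, max_len=30):
    """Decode a caption while walking a pointer over the ordered regions.

    `decoder_step(region, history) -> token` stands in for one step of a
    language model conditioned on the currently attended image region.
    """
    pointer = 0      # index of the region currently being described
    caption = []
    for _ in range(max_len):
        token = decoder_step(regions[pointer], caption)
        if token == EOS_TOKEN:
            break
        if token == NEXT_TOKEN:
            # The NEXT-token is predicted like any other word, so the
            # timing of pointer advancement emerges from the learned
            # sentence structure rather than from a separate module.
            pointer = min(pointer + 1, len(regions) - 1)
            continue  # NEXT itself is not emitted in the surface caption
        caption.append(token)
    return " ".join(caption)
```

Under this framing, evaluating advancement timing reduces to comparing the positions of predicted NEXT-tokens against the ground-truth region switch points, which is what the reported precision (86.55%) and recall (97.92%) measure.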