Novel Object Captioning is a zero-shot Image Captioning task that requires describing objects not seen in the training captions, but for which information is available from external object detectors. The key challenge is to select and describe all salient detected novel objects in the input images. In this paper, we focus on this challenge and propose the ECOL-R model (Encouraging Copying of Object Labels with Reinforced Learning), a copy-augmented transformer model that is encouraged to accurately describe the novel object labels. This is achieved via a specialised reward function in the SCST reinforcement learning framework (Rennie et al., 2017) that encourages novel object mentions while maintaining the caption quality. We further restrict SCST training to images whose reference captions mention the detected objects. We additionally improve our copy mechanism via Abstract Labels, which transfer knowledge from known to novel object types, and a Morphological Selector, which determines the appropriate inflected forms of novel object labels. The resulting model sets a new state of the art on the nocaps (Agrawal et al., 2019) and held-out COCO (Hendricks et al., 2016) benchmarks.
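To make the reward design concrete, below is a minimal sketch of an SCST-style self-critical update whose reward combines caption quality with a bonus for mentioning detected object labels, in the spirit of the specialised reward described above. All names here (cider_score, OBJECT_BONUS, the simple token-match heuristic) are illustrative assumptions, not the paper's actual implementation; the quality scorer is left as a placeholder for a standard CIDEr implementation.

```python
# Hypothetical sketch (not the paper's code): an SCST reward that adds a
# copy-encouraging bonus for mentioning detected object labels.
from typing import List, Set

OBJECT_BONUS = 1.0  # weight of the object-mention term (assumed value)

def cider_score(candidate: str, references: List[str]) -> float:
    """Placeholder for a caption-quality scorer such as CIDEr."""
    raise NotImplementedError

def object_mention_reward(caption: str, detected_labels: Set[str]) -> float:
    """Fraction of detected object labels that appear in the caption.

    A crude token-match heuristic used purely for illustration."""
    if not detected_labels:
        return 0.0
    tokens = set(caption.lower().split())
    mentioned = sum(1 for label in detected_labels if label.lower() in tokens)
    return mentioned / len(detected_labels)

def combined_reward(caption: str, references: List[str],
                    detected_labels: Set[str]) -> float:
    """Caption quality plus a bonus for copying detected object labels."""
    return (cider_score(caption, references)
            + OBJECT_BONUS * object_mention_reward(caption, detected_labels))

def scst_advantage(sampled_caption: str, greedy_caption: str,
                   references: List[str], detected_labels: Set[str]) -> float:
    """Self-critical baseline (Rennie et al., 2017): the policy-gradient
    loss scales the sampled caption's log-probability by
    r(sample) - r(greedy), so samples that beat the greedy decode
    (e.g. by mentioning more novel objects) are reinforced."""
    r_sample = combined_reward(sampled_caption, references, detected_labels)
    r_greedy = combined_reward(greedy_caption, references, detected_labels)
    return r_sample - r_greedy
```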