Humans tend to decompose a sentence into different parts like \textsc{sth do sth at someplace} and then fill each part with certain content. Inspired by this, we follow the \textit{principle of modular design} to propose a novel image captioner: learning to Collocate Visual-Linguistic Neural Modules (CVLNM). Unlike the \re{widely used} neural module networks in VQA, where the language (\ie, the question) is fully observable, \re{collocating visual-linguistic modules for image captioning is more challenging} because the language is only partially observable, and we therefore need to dynamically collocate the modules as the caption is generated. To sum up, we make the following technical contributions to design and train our CVLNM: 1) \textit{distinguishable module design} -- \re{four modules in the encoder}, including one linguistic module for function words and three visual modules for different content words (\ie, nouns, adjectives, and verbs), plus another linguistic module in the decoder for commonsense reasoning; 2) a self-attention based \textit{module controller} for robustifying the visual reasoning; and 3) a part-of-speech based \textit{syntax loss} imposed on the module controller to further regularize the training of our CVLNM. Extensive experiments on the MS-COCO dataset show that our CVLNM is more effective, \eg, achieving a new state-of-the-art 129.5 CIDEr-D, and more robust, \eg, being less likely to overfit to dataset bias and suffering less when fewer training samples are available. Code is available at \url{https://github.com/GCYZSL/CVLMN}.
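To make the collocation mechanism concrete, the listing below gives a minimal PyTorch-style sketch of a self-attention based module controller that produces soft weights over the four modules, together with an illustrative POS-based syntax loss. It is a sketch of the idea rather than the authors' implementation; all class names, dimensions, and the weighted-sum fusion strategy are assumptions.
\begin{verbatim}
# Minimal sketch (not the authors' implementation) of a self-attention
# based module controller that yields soft collocation weights over the
# four modules, plus an illustrative POS-based syntax loss.
# Class names, dimensions, and weighted-sum fusion are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModuleController(nn.Module):
    """Scores the modules from the partially generated caption."""

    def __init__(self, d_model=512, n_modules=4, n_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                               batch_first=True)
        self.score = nn.Linear(d_model, n_modules)

    def forward(self, history):
        # history: (batch, t, d_model), hidden states of the words so far.
        attended, _ = self.self_attn(history, history, history)
        logits = self.score(attended[:, -1])      # condition on latest step
        return torch.softmax(logits, dim=-1)      # (batch, n_modules)


class CollocatedEncoder(nn.Module):
    """Fuses one linguistic and three visual modules (noun/adj/verb)
    with the controller's soft weights; the modules are placeholders."""

    def __init__(self, d_model=512):
        super().__init__()
        self.neural_modules = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(4)])
        self.controller = ModuleController(d_model)

    def forward(self, visual_feats, history):
        # visual_feats: (batch, d_model) pooled image features (simplified).
        weights = self.controller(history)                       # (batch, 4)
        outs = torch.stack([m(visual_feats)
                            for m in self.neural_modules], dim=1)
        return (weights.unsqueeze(-1) * outs).sum(dim=1), weights


def syntax_loss(weights, pos_module_ids):
    """Illustrative syntax loss: push the controller toward the module
    associated with the ground-truth word's part of speech."""
    return F.nll_loss(torch.log(weights + 1e-8), pos_module_ids)


if __name__ == "__main__":
    enc = CollocatedEncoder()
    fused, w = enc(torch.randn(2, 512), torch.randn(2, 5, 512))
    loss = syntax_loss(w, torch.tensor([0, 2]))
    print(fused.shape, loss.item())
\end{verbatim}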