Image captioning transforms complex visual information into abstract natural language descriptions, which can help computers understand the world quickly. However, due to the complexity of real environments, a captioning model must identify key objects, capture the relationships among them, and then generate natural language. The whole process involves a visual understanding module and a language generation module, which makes the design of deep neural networks more challenging than for other tasks. Neural Architecture Search (NAS) has played an important role in a variety of image recognition tasks, and the RNN is an essential component of image captioning. We introduce an AutoCaption method that uses NAS to automatically design the decoder module of an image captioning model, called AutoRNN. We use a reinforcement learning method based on shared parameters to search for AutoRNN efficiently. The search space of AutoCaption covers both the connections between layers and the operations within layers, which allows AutoRNN to express a wider range of architectures; in particular, the vanilla RNN is equivalent to a subset of our search space. Experiments on the MSCOCO dataset show that our AutoCaption model achieves better performance than traditional hand-designed methods.
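To make the search space concrete, the following is a minimal PyTorch-style sketch of a shared-parameter recurrent cell whose per-node wiring (which earlier node to read from) and operation (which activation to apply) are chosen by a sampled architecture, in the spirit of ENAS-style weight sharing. All names here (AutoRNNCell, OPS, the node-averaging output) are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: a shared-parameter search space over both connections and
# operations for an RNN-like decoder cell. Weights are shared across
# all sampled architectures; an architecture is a list of
# (source_node_index, operation_name) pairs, one per internal node.
import torch
import torch.nn as nn

OPS = {  # assumed set of candidate per-node operations
    "tanh": torch.tanh,
    "relu": torch.relu,
    "sigmoid": torch.sigmoid,
    "identity": lambda x: x,
}

class AutoRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, num_nodes=4):
        super().__init__()
        self.num_nodes = num_nodes
        # Shared input transform: fuses the token input with the
        # previous hidden state into node 0.
        self.input_proj = nn.Linear(input_size + hidden_size, hidden_size)
        # One shared linear per possible (source node -> node j) edge,
        # so every sampled architecture reuses the same parameters.
        self.edges = nn.ModuleList(
            nn.ModuleList(nn.Linear(hidden_size, hidden_size, bias=False)
                          for _ in range(j + 1))
            for j in range(num_nodes)
        )

    def forward(self, x, h_prev, arch):
        nodes = [torch.tanh(self.input_proj(torch.cat([x, h_prev], dim=-1)))]
        for j, (src, op_name) in enumerate(arch):
            h = self.edges[j][src](nodes[src])  # sampled connection
            nodes.append(OPS[op_name](h))       # sampled operation
        # For simplicity, output the average of all internal nodes.
        return torch.mean(torch.stack(nodes[1:]), dim=0)

# A vanilla RNN cell is one point in this space: a single node that
# reads node 0 through a tanh, i.e. arch = [(0, "tanh")].
cell = AutoRNNCell(input_size=16, hidden_size=32)
x, h = torch.randn(2, 16), torch.zeros(2, 32)
arch = [(0, "tanh"), (1, "relu"), (0, "sigmoid"), (2, "tanh")]
h_next = cell(x, h, arch)
print(h_next.shape)  # torch.Size([2, 32])
```

Under this framing, a controller trained with reinforcement learning would sample the `arch` list, evaluate it with the shared weights, and use the resulting reward to update its sampling policy; the vanilla RNN corresponds to the single-node, tanh-only architecture noted in the comment above.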