Recently, caption generation with an encoder-decoder framework has been extensively studied and applied in various domains, such as image captioning, code captioning, and so on. In this paper, we propose a novel architecture, namely the Auto-Reconstructor Network (ARNet), which, coupled with the conventional encoder-decoder framework, works in an end-to-end fashion to generate captions. ARNet aims at reconstructing the previous hidden state from the current one, in addition to behaving as an input-dependent transition operator. ARNet therefore encourages the current hidden state to embed more information from the previous one, which helps regularize the transition dynamics of recurrent neural networks (RNNs). Extensive experimental results show that our proposed ARNet boosts performance over existing encoder-decoder models on both image captioning and source code captioning tasks. Additionally, ARNet remarkably reduces the discrepancy between the training and inference processes for caption generation. Furthermore, its performance on permuted sequential MNIST demonstrates that ARNet can effectively regularize RNNs, especially when modeling long-term dependencies. Our code is available at: https://github.com/chenxinpeng/ARNet
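To make the coupling concrete, below is a minimal PyTorch sketch of the idea described above: a second recurrent cell (the auto-reconstructor) takes the decoder's current hidden state h_t and tries to reconstruct the previous one h_{t-1}, and the L2 reconstruction error is added to the usual cross-entropy caption loss. The class name, the choice of LSTMCell for both units, and hyperparameters such as rec_weight and hidden_dim are illustrative assumptions, not the official implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class ARNetDecoder(nn.Module):
    """Decoder LSTM coupled with an auto-reconstructor cell.

    A minimal sketch, assuming the encoder feature vector has the same
    dimensionality as the decoder hidden state and is used as h_0.
    """

    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, rec_weight=0.005):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTMCell(embed_dim, hidden_dim)
        self.arnet = nn.LSTMCell(hidden_dim, hidden_dim)  # reconstructor cell
        self.classifier = nn.Linear(hidden_dim, vocab_size)
        self.rec_weight = rec_weight  # lambda balancing the two losses (assumed value)

    def forward(self, features, captions):
        batch, hidden_dim = features.size(0), self.decoder.hidden_size
        h = features  # encoder output as initial hidden state (assumption)
        c = torch.zeros(batch, hidden_dim, device=features.device)
        ar_h = torch.zeros_like(h)
        ar_c = torch.zeros_like(c)

        criterion = nn.CrossEntropyLoss()
        ce_loss, rec_loss = 0.0, 0.0
        for t in range(captions.size(1) - 1):
            prev_h = h
            # Standard decoder step: predict the next token.
            h, c = self.decoder(self.embed(captions[:, t]), (h, c))
            ce_loss = ce_loss + criterion(self.classifier(h), captions[:, t + 1])
            # ARNet step: reconstruct h_{t-1} from the current h_t.
            ar_h, ar_c = self.arnet(h, (ar_h, ar_c))
            rec_loss = rec_loss + ((ar_h - prev_h) ** 2).mean()
        return ce_loss + self.rec_weight * rec_loss
```

Because the reconstruction term ties each hidden state to its predecessor, the transition dynamics are smoothed without changing the decoder's inference-time behavior; at test time only the decoder path is used.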