Current state-of-the-art image captioning models adopt autoregressive decoders, \ie they generate each word conditioned on previously generated words, which leads to high latency during inference. To tackle this issue, non-autoregressive image captioning models have recently been proposed to significantly accelerate inference by generating all words in parallel. However, these non-autoregressive models inevitably suffer from a large degradation in generation quality since they excessively remove word dependencies. To achieve a better trade-off between speed and quality, we introduce a semi-autoregressive model for image captioning~(dubbed SATIC), which keeps the autoregressive property globally but generates words in parallel locally. Based on the Transformer, only a few modifications are needed to implement SATIC. Extensive experiments on the MSCOCO image captioning benchmark show that SATIC achieves a better trade-off without bells and whistles. Code is available at {\color{magenta}\url{https://github.com/YuanEZhou/satic}}.
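To make the decoding scheme concrete, below is a minimal Python/PyTorch sketch of semi-autoregressive decoding as described above: each step emits a group of tokens in parallel (non-autoregressive within a group), while successive groups are generated autoregressively. The \texttt{toy\_decoder} stub, the greedy decoding, and all names and sizes are hypothetical placeholders for illustration, not the actual SATIC implementation.
\begin{verbatim}
import torch

VOCAB, GROUP, MAX_LEN = 1000, 4, 16   # hypothetical sizes
BOS, EOS = 1, 2                       # hypothetical special-token ids

def toy_decoder(prefix, n_future):
    # Stand-in for a Transformer decoder: returns logits for the
    # next `n_future` positions given the generated prefix.
    return torch.randn(prefix.size(0), n_future, VOCAB)

def sat_decode(decoder, group_size=GROUP, max_len=MAX_LEN):
    # Autoregressive across groups, parallel within each group.
    tokens = [BOS]
    for _ in range(0, max_len, group_size):
        logits = decoder(torch.tensor([tokens]), group_size)
        group = logits.argmax(-1)[0].tolist()  # all K tokens at once
        tokens += group
        if EOS in group:  # stop once a group contains <eos>
            break
    return tokens[1:]

print(sat_decode(toy_decoder))
\end{verbatim}
With \texttt{group\_size}~$=1$ this reduces to ordinary autoregressive decoding, and with \texttt{group\_size}~$=$~\texttt{max\_len} to fully non-autoregressive decoding, which locates the semi-autoregressive scheme between the two extremes.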