Transformer architectures have brought about fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years. Their success has also prompted drastic changes in cross-modal tasks involving language and vision, and many researchers have already tackled these problems. In this paper, we review some of the most critical milestones in the field, as well as overall trends in how the transformer architecture has been incorporated into visuolinguistic cross-modal tasks. Furthermore, we discuss its current limitations and speculate on some of the prospects that we find imminent.