The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.