We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence, and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple. It is merely a transformer layer: it uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional transformer layer, but offers dramatically improved perplexity in language modeling tasks over very long sequences. Our model out-performs a long-range Transformer XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
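To make the mechanism described above concrete, the sketch below shows what a single block-recurrent step might look like in plain JAX: a block of token embeddings self-attends and cross-attends to a set of state vectors, the state vectors attend back to the tokens, and the state update is merged with the previous state through an LSTM-style gate. This is a simplified illustration under stated assumptions, not the released implementation: all names (block_recurrent_step, wq_t, w_gate, ...), the single-head unnormalized attention, and the exact gating formula are hypothetical.

```python
# Illustrative sketch of one block-recurrent step (not the authors' released code).
import jax
import jax.numpy as jnp


def attention(q, k, v):
    """Scaled dot-product attention. q: [n, d]; k, v: [m, d]."""
    scores = q @ k.T / jnp.sqrt(q.shape[-1])
    return jax.nn.softmax(scores, axis=-1) @ v


def block_recurrent_step(params, state, tokens):
    """One recurrent step over a block of tokens.

    state:  [num_state, d] -- recurrent state vectors carried across blocks
    tokens: [block_len, d] -- embeddings for the current block of tokens
    Returns (new_state, outputs).
    """
    p = params

    # Vertical direction: tokens self-attend within the block and
    # cross-attend to the current state vectors.
    t_self = attention(tokens @ p["wq_t"], tokens @ p["wk_t"], tokens @ p["wv_t"])
    t_cross = attention(tokens @ p["wq_tc"], state @ p["wk_s"], state @ p["wv_s"])
    outputs = tokens + (t_self + t_cross) @ p["wo_t"]  # residual connection

    # Horizontal (recurrent) direction: state vectors self-attend and
    # cross-attend to the block's tokens.
    s_self = attention(state @ p["wq_s"], state @ p["wk_ss"], state @ p["wv_ss"])
    s_cross = attention(state @ p["wq_sc"], tokens @ p["wk_tc"], tokens @ p["wv_tc"])
    update = (s_self + s_cross) @ p["wo_s"]

    # LSTM-style gate on the state update instead of a plain residual.
    gate = jax.nn.sigmoid(state @ p["w_gate"] + p["b_gate"])
    new_state = state * gate + update * (1.0 - gate)
    return new_state, outputs


def make_params(key, d):
    """Randomly initialized parameters for the sketch above."""
    names = ["wq_t", "wk_t", "wv_t", "wq_tc", "wk_s", "wv_s", "wo_t",
             "wq_s", "wk_ss", "wv_ss", "wq_sc", "wk_tc", "wv_tc", "wo_s", "w_gate"]
    keys = jax.random.split(key, len(names))
    params = {n: jax.random.normal(k, (d, d)) / jnp.sqrt(d) for n, k in zip(names, keys)}
    params["b_gate"] = jnp.zeros((d,))
    return params


if __name__ == "__main__":
    pkey, dkey = jax.random.split(jax.random.PRNGKey(0))
    d, block_len, num_state, num_blocks = 64, 16, 8, 4
    params = make_params(pkey, d)
    state = jnp.zeros((num_state, d))
    blocks = jax.random.normal(dkey, (num_blocks, block_len, d))
    # Scan the cell over the sequence of blocks.
    state, outputs = jax.lax.scan(lambda s, x: block_recurrent_step(params, s, x),
                                  state, blocks)
    print(outputs.shape)  # (4, 16, 64)
```

Scanning the cell over the sequence of blocks is what yields linear cost in sequence length: each block attends only within itself and to a fixed-size set of state vectors, rather than to the entire history.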