Transformers with linearised attention ("linear Transformers") have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the '90s. However, the original FWP formulation is more general than that of linear Transformers: a slow neural network (NN) continually reprograms the weights of a fast NN, and both nets may have arbitrary architectures. In existing linear Transformers, both NNs are feedforward and consist of a single layer. Here we explore new variations by adding recurrence to the slow and fast nets. We evaluate our novel recurrent FWPs (RFWPs) on two synthetic algorithmic tasks (code execution and sequential ListOps), on Wikitext-103 language models, and on the Atari 2600 2D game environment. Our models exhibit properties of both Transformers and RNNs. In the reinforcement learning setting, we report large improvements over LSTM in several Atari games. Our code is public.
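To make the underlying mechanism concrete, below is a minimal sketch of the outer product-based fast weight update shared by linear Transformers and FWPs, extended with a simple recurrent feedback term in the slow net. This is not the paper's implementation: the purely additive update (rather than a delta-rule variant), the ELU+1 feature map, the single feedback matrix `W_r`, and all layer sizes are illustrative assumptions.

```python
# Sketch of a fast weight programmer step with a recurrent slow net.
# Assumptions for illustration only: numpy instead of a deep learning
# framework, random untrained projections, purely additive fast weight update.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_key, d_val = 8, 16, 16

# Slow-net projections (a single linear layer, as in linear Transformers).
W_k = rng.standard_normal((d_key, d_in)) * 0.1
W_v = rng.standard_normal((d_val, d_in)) * 0.1
W_q = rng.standard_normal((d_key, d_in)) * 0.1
# Hypothetical recurrent connection: the previous fast-net output is fed
# back into the slow net, making the slow net recurrent.
W_r = rng.standard_normal((d_in, d_val)) * 0.1

def phi(x):
    """Positive feature map (ELU + 1), common in linearised attention."""
    return np.where(x > 0, x + 1.0, np.exp(x))

W_fast = np.zeros((d_val, d_key))   # fast weights, reprogrammed at every step
y_prev = np.zeros(d_val)            # previous fast-net output (recurrent state)

for x_t in rng.standard_normal((5, d_in)):      # a toy input sequence
    h_t = x_t + W_r @ y_prev                    # slow net sees its own past output
    k_t, v_t, q_t = phi(W_k @ h_t), W_v @ h_t, phi(W_q @ h_t)
    W_fast = W_fast + np.outer(v_t, k_t)        # outer-product "programming" step
    y_prev = W_fast @ q_t                       # fast net (one linear layer) is queried
    print(y_prev[:4])
```

Removing the `W_r @ y_prev` term recovers a plain linear-attention step in which the fast weight matrix is the only state carried across time; the recurrent variants studied in the paper differ in detail (e.g., in the update rule and in which of the two nets is made recurrent).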