Backpropagation through time (BPTT) is the standard algorithm for training recurrent neural networks (RNNs). It requires separate simulation phases for the forward pass (inference) and the backward pass (learning). Moreover, BPTT requires storing the complete history of network states between the two phases, so its memory consumption grows proportionally to the input sequence length. This makes BPTT unsuitable for online learning and challenging to implement on low-resource real-time systems. Real-Time Recurrent Learning (RTRL) allows online learning, and its memory requirement is independent of sequence length. However, RTRL suffers from exceptionally high computational costs that grow with the fourth power of the state size, making it computationally intractable for all but the smallest networks. In this work, we show that recurrent networks exhibiting high activity sparsity can reduce the computational cost of RTRL. Moreover, combining activity sparsity with parameter sparsity can yield savings in computation and memory large enough to make RTRL practical. Unlike previous work, this improvement in the efficiency of RTRL is achieved without any approximation of the learning process.
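As a brief worked sketch of where the fourth-power scaling comes from (using assumed but standard RTRL notation, not drawn from this paper): for a fully connected RNN with state $h_t \in \mathbb{R}^n$, update $h_t = f(h_{t-1}, x_t; \theta)$, and $|\theta| = O(n^2)$ parameters, RTRL maintains the influence matrix $J_t = \partial h_t / \partial \theta$ through the recursion

$$J_t = D_t\, J_{t-1} + \left.\frac{\partial f}{\partial \theta}\right|_t, \qquad D_t = \left.\frac{\partial f}{\partial h_{t-1}}\right|_t \in \mathbb{R}^{n \times n}.$$

Storing $J_t$ requires $O(n \cdot |\theta|) = O(n^3)$ memory, and the dense product $D_t J_{t-1}$ costs $O(n^2 \cdot |\theta|) = O(n^4)$ operations per step. If only a small fraction of units are active at step $t$ (so $D_t$ and $\partial f/\partial \theta|_t$ have few nonzero rows) and the weight matrices are sparse, only the corresponding rows and columns of $J_t$ need to be updated, which is the kind of saving the abstract refers to.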