How can neural networks be trained efficiently on large volumes of temporal data? To compute the gradients required to update parameters, backpropagation blocks computation until the forward and backward passes are complete. For temporal signals, this introduces high latency and hinders real-time learning. It also couples consecutive layers, which limits model parallelism and increases memory consumption. In this paper, we build upon Sideways, which avoids blocking by propagating approximate gradients forward in time, and we propose mechanisms for temporal integration of information based on different variants of skip connections. We also show how to decouple computation and delegate individual neural modules to different devices, allowing distributed and parallel training. The proposed Skip-Sideways achieves low-latency training and model parallelism and, importantly, is capable of extracting temporal features, leading to more stable training and improved performance on real-world action recognition video datasets such as HMDB51, UCF101, and the large-scale Kinetics-600. Finally, we show that models trained with Skip-Sideways generate better future frames than Sideways models, and hence can better utilize motion cues.
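To make the pipelining idea concrete, below is a minimal sketch of a Sideways-style update with an additive skip connection, written against toy tanh layers and a synthetic input stream. Everything here (the widths, the loss, the skip placement, the buffer names) is an illustrative assumption rather than the paper's implementation; the point is only that each layer consumes one-step-stale activations and gradients, so no layer ever blocks on another, and the resulting updates are approximate because activations and gradients are misaligned in time.

```python
import numpy as np

rng = np.random.default_rng(0)

D, L, T, LR = 8, 3, 200, 1e-2           # width, depth, stream length, step size
Ws = [rng.normal(0.0, 0.3, (D, D)) for _ in range(L)]

# Pipeline buffers, each delayed by one time step: acts[l] is the input that
# layer l will consume next tick; grads[l+1] is the gradient arriving at its output.
acts = [np.zeros(D) for _ in range(L + 1)]
grads = [np.zeros(D) for _ in range(L + 1)]

for t in range(T):
    x = np.sin(0.1 * t + np.arange(D))   # toy temporal input stream
    target = np.roll(x, 1)               # toy target: the frame shifted by one

    # Forward tick: every layer fires at once on last tick's activations.
    new_acts = [x] + [None] * L
    for l in range(L):
        new_acts[l + 1] = np.tanh(Ws[l] @ acts[l]) + acts[l]  # additive skip

    grads[L] = new_acts[L] - target      # squared-error gradient enters the top

    # Backward tick: gradients flow one layer down per time step ("sideways").
    new_grads = [None] * L + [grads[L]]
    for l in range(L):
        pre = np.tanh(Ws[l] @ acts[l])
        g_pre = grads[l + 1] * (1.0 - pre ** 2)
        # The identity skip also carries the gradient straight through.
        new_grads[l] = Ws[l].T @ g_pre + grads[l + 1]
        # Approximate update: the gradient is paired with a stale activation.
        Ws[l] -= LR * np.outer(g_pre, acts[l])

    acts, grads = new_acts, new_grads
```

In an actual Skip-Sideways deployment, each layer would presumably run on its own device and exchange only its acts and grads buffers with its neighbours once per time step, which is what permits the distributed, low-latency training described above.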