We introduce a new method for internal replay that modulates the frequency of rehearsal based on the depth of the network. While replay strategies mitigate the effects of catastrophic forgetting in neural networks, recent works on generative replay show that performing rehearsal only in the deeper layers of the network improves performance in continual learning. However, the generative approach introduces additional computational overhead, limiting its applicability. Motivated by the observation that earlier layers of neural networks forget less abruptly, we propose to update network layers with varying frequency using intermediate-level features during replay. This reduces the computational burden by omitting computations for both the deeper layers of the generator and the earlier layers of the main model. We name our method Progressive Latent Replay and show that it outperforms Internal Replay while using significantly fewer resources.
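To illustrate the core idea of depth-dependent rehearsal, the following is a minimal sketch, not the paper's implementation: it assumes a PyTorch model split into sequential blocks, a hypothetical buffer of stored intermediate-level features (`latent_buffer`, `label_buffer`), and a made-up per-block schedule `replay_period` in which deeper blocks are rehearsed more often than earlier ones.

```python
# Hypothetical sketch of depth-dependent replay frequencies (assumptions,
# not the paper's exact algorithm): a model split into blocks, stored
# intermediate features ("latents"), and a per-block rehearsal period.
import torch
import torch.nn as nn

blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(4)
])
head = nn.Linear(32, 10)
optimizer = torch.optim.SGD(
    list(blocks.parameters()) + list(head.parameters()), lr=0.1
)

# Hypothetical schedule: replay starting at block i happens every
# replay_period[i] steps, so earlier blocks are updated less often
# (they forget less abruptly), deeper blocks more often.
replay_period = [8, 4, 2, 1]

def replay_step(step, latent_buffer, label_buffer):
    """One rehearsal step on stored intermediate features."""
    for i in range(len(blocks)):
        if step % replay_period[i] != 0:
            continue
        # Replay latents stored at the input of block i; only layers
        # from depth i onward are recomputed and updated.
        z, y = latent_buffer[i], label_buffer[i]
        for deeper in blocks[i:]:
            z = deeper(z)
        loss = nn.functional.cross_entropy(head(z), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Toy buffers of features at each block's input (illustration only).
latent_buffer = {i: torch.randn(16, 32) for i in range(4)}
label_buffer = {i: torch.randint(0, 10, (16,)) for i in range(4)}
for step in range(8):
    replay_step(step, latent_buffer, label_buffer)
```

Because rehearsal starts from stored intermediate features, the forward and backward passes through the earliest layers are skipped on most steps, which is where the computational savings over full Internal Replay would come from in this sketch.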