Methods proposed in the literature for continual deep learning typically operate in a task-based sequential learning setup. A sequence of tasks is learned, one at a time, with all data of the current task available but none of the previous or future tasks. Task boundaries and identities are known at all times. This setup, however, is rarely encountered in practical applications. We therefore investigate how to transform continual learning into an online setup. We develop a system that keeps learning over time in a streaming fashion, with data distributions gradually changing and without the notion of separate tasks. To this end, we build on Memory Aware Synapses and show how this method can be made online by providing a protocol that decides i) when to update the importance weights, ii) which data to use to update them, and iii) how to accumulate the importance weights at each update step. Experimental results show the validity of the approach in the context of two applications: (self-)supervised learning of a face recognition model by watching soap series, and teaching a robot to avoid collisions.
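To make the protocol concrete, the sketch below illustrates how Memory Aware Synapses estimates and accumulates parameter importance in an online fashion. It is a minimal sketch assuming a PyTorch-style model; the function names, the `decay` factor used for accumulation, and the choice of data loader (e.g., a small buffer of recent samples) are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def update_importance(model, data_loader, omega, decay=0.9):
    """One online MAS-style importance update (illustrative sketch).

    Importance of a parameter is estimated as the average gradient
    magnitude of the squared L2 norm of the model output, following
    Memory Aware Synapses. `decay` (an assumed hyperparameter) blends
    the new estimate with previously accumulated importance.
    """
    new_omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in data_loader:            # e.g. a small buffer of recent samples
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()  # gradient of the squared output norm
        for n, p in model.named_parameters():
            if p.grad is not None:
                new_omega[n] += p.grad.abs()
        n_batches += 1
    for n, val in new_omega.items():
        prev = omega.get(n, torch.zeros_like(val))
        # Decayed running accumulation of importance at each update step.
        omega[n] = decay * prev + val / max(n_batches, 1)
    return omega

def mas_penalty(model, omega, star_params, lam=1.0):
    """Regularizer lam * sum_i Omega_i * (theta_i - theta_i*)^2 that
    discourages changes to parameters deemed important so far."""
    reg = 0.0
    for n, p in model.named_parameters():
        reg = reg + (omega[n] * (p - star_params[n]).pow(2)).sum()
    return lam * reg
```

Each call to `update_importance` answers the three questions in the protocol: when (whenever the update is triggered), which data (the samples supplied through `data_loader`), and how to accumulate (the decayed running sum over update steps).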