A wide variety of methods have been developed to enable lifelong learning in conventional deep neural networks. However, to succeed, these methods require a `batch' of samples to be available and visited multiple times during training. While this works well in a static setting, these methods suffer in the more realistic situation where data arrives in an \emph{online streaming manner}. We empirically demonstrate that the performance of current approaches degrades if the input is obtained as a stream of data with the following restrictions: $(i)$ each instance arrives one at a time and can be seen only once, and $(ii)$ the input data violates the i.i.d.\ assumption, i.e., there can be a class-based correlation. To address these challenges, we propose a novel approach (CIOSL) for class-incremental learning in an \emph{online streaming setting}. The proposed approach leverages implicit and explicit dual weight regularization and experience replay. The implicit regularization is applied via knowledge distillation, while the explicit regularization incorporates a novel parameter-regularization approach that learns the joint distribution of the replay buffer and the current sample. Moreover, we propose an efficient online memory replay and buffer-replacement strategy that significantly boosts the model's performance. Extensive experiments and ablations on challenging datasets show the efficacy of the proposed method.
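The two core ingredients the abstract mentions, a bounded replay buffer for the one-pass stream and a knowledge-distillation penalty, can be illustrated with a minimal sketch. This is not the paper's actual CIOSL implementation: the reservoir-style replacement policy, the class names, and the temperature parameter below are illustrative assumptions.

```python
import random
import math

class ReplayBuffer:
    """Fixed-capacity buffer for a one-pass stream.
    Uses reservoir-style replacement (an assumed policy, not
    necessarily the paper's): after n samples, each one has been
    retained with probability capacity / n."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, sample):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample  # evict a random stored sample

    def sample(self, k):
        """Draw a replay mini-batch (without replacement)."""
        return random.sample(self.data, min(k, len(self.data)))

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Standard knowledge-distillation term: KL divergence between
    temperature-softened teacher and student distributions."""
    def soft(z):
        m = max(z)
        e = [math.exp((v - m) / T) for v in z]
        s = sum(e)
        return [v / s for v in e]
    p, q = soft(teacher_logits), soft(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In an online setting each incoming instance would be seen once, interleaved with a small replay batch drawn from the buffer, with the distillation term penalizing drift from the previous model's predictions.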