Learning continuously throughout a model's lifetime is fundamental to deploying machine learning solutions that are robust to drifts in the data distribution. Advances in Continual Learning (CL) with recurrent neural networks could pave the way to a large number of applications where incoming data is non-stationary, such as natural language processing and robotics. However, the existing body of work on the topic is still fragmented, with approaches that are application-specific and whose assessment is based on heterogeneous learning protocols and datasets. In this paper, we organize the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks. We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications. We also provide a broad empirical evaluation of CL and recurrent neural networks in the class-incremental scenario, by testing their ability to mitigate forgetting with a number of different strategies which are not specific to sequential data processing. Our results highlight the key role played by the sequence length and the importance of a clear specification of the CL scenario.