Continual Learning (CL) refers to a learning setup where the data distribution is non-stationary and the model must acquire new knowledge without forgetting what it has already learned. The study of CL for sequential patterns has so far revolved around fully trained recurrent networks. In this work, instead, we introduce CL in the context of Echo State Networks (ESNs), where the recurrent component is kept fixed. We provide the first evaluation of catastrophic forgetting in ESNs and highlight the benefits of using CL strategies that are not applicable to trained recurrent models. Our results confirm the ESN as a promising model for CL and open the way to its use in streaming scenarios.
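As a minimal illustration of the architecture the abstract refers to (a sketch under stated assumptions, not the paper's implementation), the snippet below builds an ESN whose recurrent reservoir is random and never trained, while only the linear readout is fit in closed form. All dimensions, scaling constants, and function names here are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
n_inputs, n_reservoir, n_outputs = 3, 100, 2

# Reservoir weights: random and FIXED -- never updated by learning.
W_in = rng.uniform(-0.1, 0.1, (n_reservoir, n_inputs))
W = rng.normal(0.0, 1.0, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(U):
    """Drive the fixed reservoir with an input sequence U of shape (T, n_inputs)."""
    h = np.zeros(n_reservoir)
    states = []
    for u in U:
        h = np.tanh(W_in @ u + W @ h)
        states.append(h)
    return np.array(states)

def fit_readout(states, targets, reg=1e-6):
    """Fit the readout (the only trained component) by ridge regression."""
    A = states.T @ states + reg * np.eye(n_reservoir)
    return np.linalg.solve(A, states.T @ targets)

# Toy usage: predict a one-step-delayed copy of the input.
U = rng.normal(size=(200, n_inputs))
Y = np.roll(U[:, :n_outputs], 1, axis=0)
S = run_reservoir(U)
W_out = fit_readout(S, Y)
print("train MSE:", np.mean((S @ W_out - Y) ** 2))
```

Because all adaptation is confined to the linear readout, the readout can be refit or regularized per task without touching the recurrent weights, which is what makes CL strategies applicable here that cannot be used with fully trained recurrent models.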