Efficient continual learning in humans is enabled by a rich set of neurophysiological mechanisms and interactions between multiple memory systems. The brain efficiently encodes information in non-overlapping sparse codes, which facilitates faster learning of new associations while controlling interference with previous associations. To mimic sparse coding in DNNs, we enforce activation sparsity along with a dropout mechanism that encourages the model to activate similar units for semantically similar inputs and to reduce overlap with the activation patterns of semantically dissimilar inputs. This provides an efficient mechanism for balancing the reusability and interference of features, depending on the similarity of classes across tasks. Furthermore, we employ sparse coding in a multiple-memory replay mechanism: our method maintains an additional long-term semantic memory that aggregates and consolidates the information encoded in the synaptic weights of the working model. Our extensive evaluation and characteristic analysis show that, equipped with these biologically inspired mechanisms, the model can further mitigate forgetting.
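To make the two mechanisms concrete, the following is a minimal PyTorch sketch, not the authors' implementation: activation sparsity is approximated with a k-winner-take-all layer, and the long-term semantic memory is consolidated from the working model via an exponential-moving-average update. The names `KWinnerSparsity`, `update_semantic_memory`, `k`, and `decay` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KWinnerSparsity(nn.Module):
    """Enforce activation sparsity by keeping only the k largest
    activations per sample and zeroing the rest (a simple stand-in
    for the activation-sparsity mechanism described above)."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, units); build a binary mask selecting the
        # top-k activations in each row.
        topk = torch.topk(x, self.k, dim=1)
        mask = torch.zeros_like(x).scatter_(1, topk.indices, 1.0)
        return x * mask

@torch.no_grad()
def update_semantic_memory(semantic_model: nn.Module,
                           working_model: nn.Module,
                           decay: float = 0.999) -> None:
    """Consolidate the working model's synaptic weights into the
    long-term semantic memory (assumed EMA consolidation rule)."""
    for p_sem, p_work in zip(semantic_model.parameters(),
                             working_model.parameters()):
        # semantic <- decay * semantic + (1 - decay) * working
        p_sem.mul_(decay).add_(p_work, alpha=1 - decay)
```

Under this assumed rule, a decay close to 1 makes the semantic memory a slow, stable aggregate of the working model, consistent with its role as long-term storage during replay.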