Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting. Biological neural networks and their artificial counterparts differ considerably in the complexity of their synapses, the way they process information, and their learning mechanisms, which may explain the mismatch in performance. We consider a biologically plausible framework that comprises separate populations of exclusively excitatory and inhibitory neurons adhering to Dale's principle, in which the excitatory pyramidal neurons are augmented with dendrite-like structures for context-dependent processing of stimuli. We then conduct a comprehensive study of the roles and interactions of different brain-inspired mechanisms, including sparse non-overlapping representations, Hebbian learning, synaptic consolidation, and replay of the past activations that accompanied the learning event. Our study suggests that employing multiple complementary mechanisms within a biologically plausible architecture, as the brain does, may be effective in enabling continual learning in ANNs.
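To make two of the architectural ingredients concrete, the sketch below illustrates, under stated assumptions, a sign-constrained layer in the spirit of Dale's principle and a dendrite-like context gate for context-dependent processing. It is not the authors' implementation; the module names (DaleLinear, DendriticGate), parameters such as frac_excitatory, and the choice of PyTorch are illustrative assumptions only.

```python
# A minimal sketch (not the paper's implementation) of two mechanisms named in the
# abstract: a layer whose units are split into excitatory and inhibitory populations
# with sign-fixed outgoing weights (Dale's principle), and a dendrite-like context
# gate that modulates activations per task. All names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DaleLinear(nn.Module):
    """Linear layer whose input units form excitatory and inhibitory populations;
    outgoing weight magnitudes are kept non-negative and multiplied by a fixed
    sign, so each unit is purely excitatory or purely inhibitory."""

    def __init__(self, in_features: int, out_features: int, frac_excitatory: float = 0.8):
        super().__init__()
        n_exc = int(in_features * frac_excitatory)
        # +1 for excitatory inputs, -1 for inhibitory inputs (fixed, not learned).
        sign = torch.cat([torch.ones(n_exc), -torch.ones(in_features - n_exc)])
        self.register_buffer("sign", sign)
        self.weight = nn.Parameter(torch.rand(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Non-negative magnitudes times fixed signs enforce the sign constraint.
        w = F.relu(self.weight) * self.sign
        return F.linear(x, w, self.bias)


class DendriticGate(nn.Module):
    """Dendrite-like segments: each unit holds one segment per context; the
    response of the strongest segment to the current context signal gates the
    somatic activation, giving context-dependent processing of the stimulus."""

    def __init__(self, num_units: int, num_contexts: int, context_dim: int):
        super().__init__()
        self.segments = nn.Parameter(torch.randn(num_units, num_contexts, context_dim) * 0.1)

    def forward(self, h: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, context_dim); segment responses: (batch, units, contexts)
        seg = torch.einsum("bd,ucd->buc", context, self.segments)
        gate = torch.sigmoid(seg.max(dim=-1).values)  # strongest segment wins
        return h * gate


# Usage: a Dale-constrained hidden layer whose excitatory output is gated by context.
if __name__ == "__main__":
    x = torch.randn(4, 100)                            # stimulus batch
    ctx = torch.eye(10)[torch.randint(0, 10, (4,))]    # one-hot task/context signal
    layer = DaleLinear(100, 64)
    gate = DendriticGate(64, num_contexts=5, context_dim=10)
    h = F.relu(layer(x))
    y = gate(h, ctx)
    print(y.shape)  # torch.Size([4, 64])
```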