The work of McCloskey and Cohen popularized the concept of catastrophic interference. They trained a neural network on addition, presenting two groups of examples as two different tasks; in their experiments, learning the second task rapidly degraded the knowledge acquired on the first. We hypothesize that this could be a symptom of a fundamental problem: addition is an algorithmic task that should not be learned through pattern recognition. Therefore, other model architectures better suited to this task could avoid catastrophic forgetting. We use a neural network with a different architecture that can be trained to recover the correct algorithm for binary addition. This neural network includes conditional clauses that are handled naturally by the back-propagation algorithm. We test it both in the setting proposed by McCloskey and Cohen and by training on random additions presented one at a time. Not only does the network not suffer from catastrophic forgetting, but its predictive power on unseen pairs of numbers improves as training progresses. We also show that this effect is robust, persisting when results are averaged over many simulations. This work emphasizes the importance of neural network architecture for the emergence of catastrophic forgetting and introduces a neural network that is able to learn an algorithm.
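The abstract does not spell out the architecture, but the idea of a conditional clause that back-propagation can handle can be illustrated by replacing a hard if/else with a sigmoid gate that blends two branches. The sketch below is a minimal illustration under stated assumptions, not the paper's model: the full-adder carry rule as the target, the gate parameters w and b, the learning rate, and the online one-example-at-a-time loop are all choices made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Soft conditional: out = g(x) * branch_true + (1 - g(x)) * branch_false.
# The gate g plays the role of an "if" whose condition is learned by
# gradient descent. Here the branches are the constants 1 and 0, so the
# unit reduces to a logistic neuron, but the gating form generalizes to
# arbitrary branch functions.
w = rng.normal(size=3)   # illustrative gate weights
b = 0.0

def forward(x):
    g = sigmoid(w @ x + b)       # learned condition
    return g * 1.0 + (1.0 - g) * 0.0  # soft if/else between two branches

# Online training, one random example at a time, echoing the
# example-by-example continual-learning setting of the abstract.
lr = 1.0
for step in range(5000):
    x = rng.integers(0, 2, size=3).astype(float)  # bits a, b, carry-in
    y = float(x.sum() >= 2)                       # carry-out of a full adder
    p = forward(x)
    grad_z = p - y   # d(cross-entropy)/dz for a sigmoid output
    w -= lr * grad_z * x
    b -= lr * grad_z

# Evaluate on all 8 bit combinations (the analogue of unseen pairs).
for a in (0, 1):
    for bb in (0, 1):
        for c in (0, 1):
            x = np.array([a, bb, c], float)
            print(a, bb, c, "->", round(float(forward(x))))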