Neural networks have been criticised for their inability to perform continual learning: when a new concept is introduced, catastrophic forgetting causes rapid unlearning of previously learned concepts. Catastrophic forgetting can be alleviated by specifically designed models and training techniques. This paper introduces a novel Spline Additive Model (SAM). SAM exhibits intrinsic memory retention and sufficient expressive power for many practical tasks, but it is not a universal function approximator. Using the Kolmogorov-Arnold representation theorem, SAM is extended to a novel universal function approximator called the Kolmogorov-Arnold Spline Additive Model (KASAM). The memory retention, expressive power, and limitations of SAM and KASAM are illustrated analytically and empirically. SAM exhibited robust but imperfect memory retention, with small regions of overlapping interference in sequential learning tasks. KASAM was more susceptible to catastrophic forgetting, but in combination with pseudo-rehearsal training techniques it exhibited superior performance and memory retention in regression tasks.
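For context, the following notation is not taken from the abstract itself, but sketches the standard form of the two constructions it names. A spline additive model restricts the learned function to a sum of univariate splines, one per input coordinate,
\[
f_{\mathrm{SAM}}(x_1, \dots, x_n) \;=\; \sum_{i=1}^{n} s_i(x_i),
\]
where each $s_i$ is a univariate spline built from locally supported basis functions; this locality is plausibly what underlies the intrinsic memory retention claimed above, since a gradient update only perturbs basis coefficients near the training input. Such a model cannot represent interactions between inputs, which is why it is not a universal approximator. The Kolmogorov-Arnold representation theorem states that every continuous $f : [0,1]^n \to \mathbb{R}$ can be written as
\[
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)
\]
for continuous univariate functions $\Phi_q$ and $\varphi_{q,p}$; presumably KASAM obtains universality by composing spline additive components in this two-layer form.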