We introduce a novel continual learning method based on multifidelity deep neural networks. The method learns the correlation between the output of previously trained models and the desired output of the model on the current training dataset, which limits catastrophic forgetting. On its own, the multifidelity continual learning method limits forgetting robustly across several datasets. We further show that it can be combined with existing continual learning methods, including replay and memory-aware synapses, to reduce catastrophic forgetting even further. The proposed method is especially suited to physical problems in which the data satisfy the same physical laws on each domain, and to physics-informed neural networks, because in these cases we expect a strong correlation between the output of the previous model and the output of the model on the current training domain.
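To make the core idea concrete, the sketch below illustrates one way such a multifidelity correlation could be set up; it is a minimal, hypothetical PyTorch example, not the authors' implementation. It assumes a frozen model from the previous task acts as the low-fidelity predictor, and a new network learns the correlation between that output and the current task's targets through a learned linear term plus a small nonlinear correction; the class names, hidden sizes, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultifidelityContinualNet(nn.Module):
    """Sketch: learn the correlation between a frozen previous model's output
    (low fidelity) and the current task's targets (high fidelity).
    Hypothetical architecture; the linear/nonlinear split and sizes are assumptions."""

    def __init__(self, prev_model: nn.Module, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.prev_model = prev_model
        for p in self.prev_model.parameters():  # freeze the previous-task model
            p.requires_grad_(False)
        # Linear correlation between the previous output and the new targets.
        self.linear = nn.Linear(out_dim, out_dim)
        # Nonlinear correction conditioned on the input and the previous output.
        self.nonlinear = nn.Sequential(
            nn.Linear(in_dim + out_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            y_low = self.prev_model(x)  # low-fidelity prediction from the old model
        return self.linear(y_low) + self.nonlinear(torch.cat([x, y_low], dim=-1))


def train_on_new_task(prev_model, loader, in_dim, out_dim, epochs=100, lr=1e-3):
    """Fit the correlation network using only the current task's data."""
    model = MultifidelityContinualNet(prev_model, in_dim, out_dim)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```

Because the previous model is frozen, its behavior on earlier domains is preserved exactly; the new network only has to learn how those predictions relate to the current data, which is the sense in which forgetting is limited in this sketch.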