Explaining the behaviors of deep neural networks, usually regarded as black boxes, is critical, especially as they are being adopted across diverse aspects of human life. Taking advantage of interpretable machine learning (interpretable ML), this work proposes a novel tool called Catastrophic Forgetting Dissector (CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method called Critical Freezing based on the observations made with our tool. Experiments on ResNet show how catastrophic forgetting happens, in particular which components of this well-known network are forgetting. Our new continual learning algorithm outperforms various recent techniques by a significant margin, demonstrating the value of the investigation. Critical Freezing not only mitigates catastrophic forgetting but also provides explainability.
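As a rough illustration of the freezing idea named in the abstract, the sketch below freezes a hypothetical set of ResNet blocks treated as "critical" before fine-tuning on a new task. This is only a minimal sketch under assumptions: the block names (`layer1`, `layer2.0`) and the helper `apply_critical_freezing` are placeholders for illustration, not the authors' implementation; in the paper, the critical components would be identified by the CFD analysis.

```python
# Minimal sketch of freezing "critical" blocks of a ResNet, assuming PyTorch
# and torchvision. The chosen block names are hypothetical examples, not the
# components identified by CFD in the paper.
import torch
import torchvision


def apply_critical_freezing(model, critical_block_names):
    """Freeze parameters belonging to blocks flagged as critical so they are
    preserved while the remaining layers adapt to a new task."""
    for name, param in model.named_parameters():
        # A parameter is frozen if its name falls under any critical block prefix.
        if any(name.startswith(block) for block in critical_block_names):
            param.requires_grad = False


# Example usage with assumed critical blocks (placeholders).
model = torchvision.models.resnet18(weights=None)
apply_critical_freezing(model, critical_block_names=["layer1", "layer2.0"])

# Only the still-trainable parameters are handed to the optimizer for the new task.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```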