Stream learning refers to the ability to acquire and transfer knowledge across a continuous stream of data, without forgetting and without repeated passes over the data. A common way to avoid catastrophic forgetting is to intersperse new examples with replays of old examples, stored as raw image pixels or reproduced by generative models. Here, we consider stream learning in image classification tasks and propose a novel hypothesis-driven Augmented Memory Network, which efficiently consolidates previous knowledge with a limited number of hypotheses in an augmented memory and replays relevant hypotheses to avoid catastrophic forgetting. The advantages of hypothesis-driven replay over image pixel replay and generative replay are two-fold. First, hypothesis-based knowledge consolidation avoids redundant information in the image pixel space, making memory usage more efficient. Second, hypotheses in the augmented memory can be reused for learning new tasks, improving generalization and transfer-learning ability. We evaluate our method on three stream-learning object recognition datasets. Our method performs comparably to or better than state-of-the-art (SOTA) methods while offering more efficient memory usage. All source code and data are publicly available at https://github.com/kreimanlab/AugMem.
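To make the memory-efficiency argument concrete, the following is a minimal illustrative sketch, not the authors' implementation: it contrasts the per-example footprint of raw-pixel replay with replay of compact "hypotheses" (here assumed to be low-dimensional intermediate-layer activations). All sizes, names, and the buffer logic below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pixel replay: store a whole image per remembered example
# (assumed 3x224x224 uint8 for illustration).
image = rng.integers(0, 256, size=(3, 224, 224), dtype=np.uint8)

# Hypothesis replay: store a compact hypothesis per example,
# e.g., an assumed 512-d float32 activation from an intermediate layer.
hypothesis = rng.standard_normal(512).astype(np.float32)

print(f"pixel replay entry:      {image.nbytes} bytes")       # 150528 bytes
print(f"hypothesis replay entry: {hypothesis.nbytes} bytes")  # 2048 bytes

# During stream learning, incoming examples would be interleaved with
# hypotheses replayed from the augmented memory to mitigate forgetting.
augmented_memory = [hypothesis]          # hypothetical buffer of hypotheses
replayed = augmented_memory[:1]          # replayed alongside new stream data
```

Under these assumed sizes, each stored hypothesis is roughly 70x smaller than the corresponding raw image, which is the intuition behind the efficiency claim; the actual representation and memory budget are defined in the paper and repository.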