The two main impediments to continual learning are catastrophic forgetting and limited memory for storing data. To cope with these challenges, we propose a novel, cognitively inspired approach that trains autoencoders with Neural Style Transfer to encode and store images. During training on a new task, reconstructed images from the encoded episodes are replayed to avoid catastrophic forgetting. To cope with image degradation, the loss on these reconstructed images is down-weighted during classifier training. When the system runs out of memory, the encoded episodes are converted into centroids and covariance matrices, from which pseudo-images are generated during classifier training, keeping classifier performance stable while using less memory. Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
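The memory-consolidation step described above, collapsing stored encodings into per-class centroids and covariance matrices and sampling pseudo-encodings from them, can be sketched as follows. This is a minimal illustration under assumed Gaussian sampling; the function names, encoding dimension, and sample counts are hypothetical, not the paper's exact procedure:

```python
import numpy as np

def summarize_episodes(encoded):
    """Collapse stored encoded episodes (n_samples x dim) into a centroid
    and covariance matrix, freeing the memory used by the raw encodings."""
    centroid = encoded.mean(axis=0)
    covariance = np.cov(encoded, rowvar=False)
    return centroid, covariance

def generate_pseudo_samples(centroid, covariance, n, seed=None):
    """Draw pseudo-encodings from the stored class statistics; these stand
    in for the discarded episodes during classifier training."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(centroid, covariance, size=n)

# Example with assumed sizes: 200 encoded episodes of dimension 64 compress
# to 64 + 64*64 stored numbers instead of 200*64.
rng = np.random.default_rng(0)
encoded = rng.normal(size=(200, 64))
centroid, cov = summarize_episodes(encoded)
pseudo = generate_pseudo_samples(centroid, cov, n=32, seed=1)
print(pseudo.shape)  # (32, 64)
```

The storage saving comes from keeping only the first two moments per class: the cost is fixed in the number of episodes, at the price of assuming the encodings are well summarized by a single Gaussian.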