This work proposes a minimal computational model for learning a structured memory of multiple object classes in an incremental setting. Our approach is based on establishing a closed-loop transcription between the classes and their corresponding subspaces in a low-dimensional feature space, known as a linear discriminative representation. Our method is both simpler and more efficient than existing approaches to incremental learning in terms of model size, storage, and computation: it requires only a single, fixed-capacity autoencoding network, whose feature space serves both discriminative and generative purposes. All network parameters are optimized simultaneously, without architectural manipulations, by solving a constrained minimax game between the encoding and decoding maps over a single rate-reduction-based objective. Experimental results show that our method can effectively alleviate catastrophic forgetting, achieving significantly better performance than prior work on both generative and discriminative tasks.
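For intuition about the rate-reduction quantity that the minimax objective optimizes, the coding-rate measure of Yu et al. (2020) — on which such objectives are based — can be sketched numerically. This is a minimal illustration, not the paper's full closed-loop objective; the function names and the choice of ε below are illustrative assumptions:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T) for a d x n feature matrix Z."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the rate of the whole feature set
    minus the average rate of its class-conditional subsets."""
    n = Z.shape[1]
    r_classes = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return coding_rate(Z, eps) - r_classes
```

A larger ΔR indicates that class features jointly span more volume while each class stays compact, i.e. the classes occupy more mutually discriminative subspaces; the closed-loop game pits the encoder against the decoder over quantities of this form.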