Catastrophic forgetting in neural networks during incremental learning remains a challenging problem. Previous research investigated catastrophic forgetting in fully connected networks, with some earlier work exploring activation functions and learning algorithms. Applications of neural networks have since been extended to similarity learning, and understanding how similarity-learning loss functions are affected by catastrophic forgetting is of significant interest. We investigate catastrophic forgetting for four well-known similarity-based loss functions during incremental class learning: Angular, Contrastive, Center, and Triplet loss. Our results show that the rate of catastrophic forgetting differs across loss functions on multiple datasets: Angular loss was least affected, followed by Contrastive loss, Triplet loss, and Center loss with good mining techniques. We implemented three existing incremental learning techniques, iCaRL, EWC, and EBLL, and further propose a novel technique that uses Variational Autoencoders (VAEs) to generate representations of the network's intermediate layers as exemplars. Our method outperformed the three existing state-of-the-art techniques, showing that stored images (exemplars) are not required for incremental learning with similarity learning. The representations generated by the VAEs help preserve the regions of the embedding space used by prior knowledge, so that new knowledge does not ``overwrite'' them.
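The replay mechanism described above can be sketched as follows. This is a minimal illustration of the data flow only: a toy decoder with random weights stands in for a VAE decoder trained on old-class intermediate activations (an assumption for brevity; the names `decode`, `generate_pseudo_exemplars`, and `build_replay_batch` are hypothetical, not from the paper), and generated representations are mixed into each training batch in place of stored images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE decoder mapping a latent code z to an
# intermediate-layer representation. In the actual method the decoder is
# trained on activations of old-class images; random weights here only
# illustrate the shapes and data flow.
LATENT_DIM, EMBED_DIM = 8, 32
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, EMBED_DIM))
b_dec = np.zeros(EMBED_DIM)

def decode(z):
    return np.tanh(z @ W_dec + b_dec)

def generate_pseudo_exemplars(n):
    """Sample latent codes from the VAE prior N(0, I) and decode them into
    representations that serve as exemplars for previously learned classes."""
    z = rng.standard_normal((n, LATENT_DIM))
    return decode(z)

def build_replay_batch(new_reprs, n_replay):
    """Mix new-task representations with generated old-task representations,
    so the similarity loss keeps constraining regions of the embedding space
    occupied by prior knowledge instead of letting them be overwritten."""
    old_reprs = generate_pseudo_exemplars(n_replay)
    return np.concatenate([new_reprs, old_reprs], axis=0)

# Usage: a batch of 16 new-class representations plus 8 generated exemplars.
new_batch = rng.standard_normal((16, EMBED_DIM))
batch = build_replay_batch(new_batch, n_replay=8)
print(batch.shape)
```

Because the exemplars are sampled from the VAE rather than read from disk, no raw images from earlier tasks need to be stored.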