Humans and animals can continuously learn new information over their lifetimes without losing previously acquired knowledge. Artificial neural networks, however, struggle with this: new information conflicts with old knowledge, resulting in catastrophic forgetting. The complementary learning systems (CLS) theory suggests that the interplay between the hippocampus and the neocortex enables long-term, efficient learning in the mammalian brain, with memory replay mediating the interaction between these two systems to reduce forgetting. The proposed Lifelong Self-Supervised Domain Adaptation (LLEDA) framework draws inspiration from CLS theory and mimics the interaction between two networks: a DA network, inspired by the hippocampus, that quickly adapts to changes in the data distribution, and an SSL network, inspired by the neocortex, that gradually learns domain-agnostic general representations. LLEDA's latent replay technique facilitates communication between these two networks by reactivating and replaying past latent representations from memory, stabilising long-term generalisation and retention without interfering with previously learned information. Extensive experiments demonstrate that the proposed method outperforms several other methods, achieving long-term adaptation while remaining less prone to catastrophic forgetting when transferred to new domains.
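To make the latent replay idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): a bounded buffer stores past latent representations instead of raw inputs, and samples from it are mixed into the current batch to stabilise retention. All names, sizes, and the toy 1-D latents are illustrative assumptions.

```python
import random
from collections import deque

class LatentReplayBuffer:
    """Hypothetical sketch of latent replay: store past latent
    representations (not raw inputs) and replay them alongside new data."""

    def __init__(self, capacity=1000, seed=0):
        # Oldest latents are evicted automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def store(self, latent):
        # Save a latent representation produced by the slow-learning network.
        self.buffer.append(latent)

    def replay(self, batch_size):
        # Sample stored latents to interleave with the current batch,
        # reactivating past representations without revisiting raw inputs.
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(list(self.buffer), k)

# Usage: fill the buffer during training, then mix replayed latents
# with new-domain latents at each step (toy 1-D "latents" shown here).
buf = LatentReplayBuffer(capacity=4)
for step in range(6):
    buf.store([float(step)])
mixed = buf.replay(batch_size=2)  # past latents to combine with new ones
```

Replaying compact latents rather than raw examples keeps memory costs low and avoids re-exposing the fast-adapting network to stale raw inputs, which is the design choice the abstract attributes to LLEDA.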