Prototypes, as representations of class embeddings, have been explored to reduce the memory footprint and mitigate forgetting in continual learning scenarios. However, prototype-based methods still suffer from abrupt performance deterioration due to semantic drift and prototype interference. In this study, we propose Contrastive Prototypical Prompt (CPP) and show that task-specific prompt-tuning, when optimized over a contrastive learning objective, can effectively address both obstacles and significantly improve the potency of prototypes. Our experiments demonstrate that CPP excels on four challenging class-incremental learning benchmarks, achieving 4% to 6% absolute improvements over state-of-the-art methods. Moreover, CPP does not require a rehearsal buffer and largely bridges the performance gap between continual learning and offline joint learning, showcasing a promising design scheme for continual learning systems under a Transformer architecture.