This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL). Each task is from a different domain or product. The DIL setting is particularly suited to ASC because, at test time, the system need not know the task/domain to which the test data belongs. To our knowledge, this setting has not been studied before for ASC. This paper proposes a novel model called CLASSIC. The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing. Experimental results show the high effectiveness of CLASSIC.
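For readers unfamiliar with how a contrastive objective can double as a distillation signal, the sketch below shows one common way such a loss can be written in PyTorch. This is only an illustrative sketch, not CLASSIC's actual formulation: the function name, the `temperature` parameter, and the InfoNCE-style loss form are assumptions made here for clarity.

```python
# Hypothetical sketch of a contrastive knowledge-distillation objective.
# NOT the authors' implementation; names and loss form are illustrative.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(new_feats, old_feats, temperature=0.1):
    """Pull each new-model representation toward the frozen old model's
    representation of the same input (positive pair) and push it away
    from the other samples in the batch (negatives)."""
    new_feats = F.normalize(new_feats, dim=1)          # (B, d), current model
    old_feats = F.normalize(old_feats, dim=1)          # (B, d), frozen old model
    logits = new_feats @ old_feats.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(new_feats.size(0), device=new_feats.device)
    # Diagonal entries (same input through old vs. new encoder) are positives.
    return F.cross_entropy(logits, targets)
```

Because the supervision comes from matching representations rather than from task-specific heads, an objective of this kind can transfer and preserve knowledge without requiring a task id at test time, which is the property the DIL setting demands.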