Since open social platforms allow a large and continuous flow of unverified information, rumors can emerge unexpectedly and spread quickly. However, existing rumor detection (RD) models typically assume that training and testing data share the same distribution, and therefore cannot cope with the continuously changing social network environment. This paper proposes a Continual Prompt-Tuning RD (CPT-RD) framework, which avoids catastrophic forgetting (CF) of upstream tasks during sequential task learning and enables bidirectional knowledge transfer between domain tasks. Specifically, we propose the following strategies: (a) our design explicitly decouples shared and domain-specific knowledge, thus reducing interference among different domains during optimization; (b) several knowledge-transfer strategies carry knowledge from upstream tasks forward to deal with emergencies; (c) a task-conditioned prompt-wise hypernetwork (TPHNet) consolidates past domains. In addition, CPT-RD avoids CF without requiring a rehearsal buffer.
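To make the task-conditioned prompt-wise hypernetwork idea concrete, the following is a minimal PyTorch sketch of one plausible realization: a small MLP, conditioned on a learned per-task embedding, generates a soft prompt that is prepended to a frozen language model's input embeddings. All class names, dimensions, and layer choices here are illustrative assumptions, not the paper's exact TPHNet architecture.

```python
import torch
import torch.nn as nn

class TaskConditionedPromptHypernetwork(nn.Module):
    """Illustrative sketch of a TPHNet-style hypernetwork (assumed design).

    Maps a learned task embedding to a soft prompt of shape
    (prompt_length, hidden_dim), to be prepended to the input
    embeddings of a frozen pretrained language model.
    """

    def __init__(self, num_tasks: int, task_emb_dim: int = 64,
                 prompt_length: int = 20, hidden_dim: int = 768):
        super().__init__()
        self.prompt_length = prompt_length
        self.hidden_dim = hidden_dim
        # One learned embedding per domain task.
        self.task_embeddings = nn.Embedding(num_tasks, task_emb_dim)
        # Small MLP hypernetwork: task embedding -> flattened prompt parameters.
        self.generator = nn.Sequential(
            nn.Linear(task_emb_dim, 256),
            nn.ReLU(),
            nn.Linear(256, prompt_length * hidden_dim),
        )

    def forward(self, task_id: torch.Tensor) -> torch.Tensor:
        # task_id: (batch,) integer ids of the current domain task.
        z = self.task_embeddings(task_id)          # (batch, task_emb_dim)
        prompt = self.generator(z)                 # (batch, L * H)
        return prompt.view(-1, self.prompt_length, self.hidden_dim)


# Usage sketch: generate a prompt for task 3 and prepend it to token embeddings.
hyper = TaskConditionedPromptHypernetwork(num_tasks=8)
prompt = hyper(torch.tensor([3]))                  # (1, 20, 768)
token_embs = torch.randn(1, 128, 768)              # stand-in for frozen-PLM embeddings
model_input = torch.cat([prompt, token_embs], dim=1)  # (1, 148, 768)
```

Because the prompts are generated from compact task embeddings rather than stored per task, consolidating past domains amounts to training the shared generator, which is one way such a design can sidestep a rehearsal buffer.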