Continual relation extraction (CRE) requires the model to continually learn new relations from class-incremental data streams. In this paper, we propose a Frustratingly Easy but Effective Approach (FEA) with two learning stages for CRE: 1) Fast Adaptation (FA) warms up the model with only the new data; 2) Balanced Tuning (BT) finetunes the model on the balanced memory data. Despite its simplicity, FEA achieves comparable (on TACRED) or superior (on FewRel) performance compared with the state-of-the-art baselines. Through careful examination, we find that the data imbalance between new and old relations leads to a skewed decision boundary in the head classifiers over the pretrained encoders, thus hurting the overall performance. In FEA, the FA stage unleashes the potential of the memory data for the subsequent finetuning, while the BT stage helps establish a more balanced decision boundary. Under a unified view, we find that two strong CRE baselines can be subsumed into the proposed training pipeline. The success of FEA also provides actionable insights and suggestions for future model design in CRE.
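For concreteness, the following is a minimal sketch of the two-stage FEA pipeline described above, assuming a standard PyTorch encoder-classifier setup. All names (`encoder`, `classifier`, the data loaders, and the hyperparameters) are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of the two-stage FEA pipeline (assumed PyTorch setup).
import torch
import torch.nn.functional as F

def train_epoch(encoder, classifier, loader, optimizer):
    # One pass of standard cross-entropy training over a data loader.
    encoder.train(); classifier.train()
    for inputs, labels in loader:
        optimizer.zero_grad()
        logits = classifier(encoder(inputs))
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        optimizer.step()

def fea_task_step(encoder, classifier, new_loader, balanced_memory_loader,
                  fa_epochs=2, bt_epochs=2, lr=1e-5):
    # Hyperparameter values here are placeholders, not the paper's settings.
    optimizer = torch.optim.AdamW(
        list(encoder.parameters()) + list(classifier.parameters()), lr=lr)
    # Stage 1: Fast Adaptation (FA) -- warm up on the new-relation data only.
    for _ in range(fa_epochs):
        train_epoch(encoder, classifier, new_loader, optimizer)
    # Stage 2: Balanced Tuning (BT) -- finetune on class-balanced memory data,
    # so the decision boundary is not skewed toward the new relations.
    for _ in range(bt_epochs):
        train_epoch(encoder, classifier, balanced_memory_loader, optimizer)
```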