Weak supervision has shown promising results in many natural language processing tasks, such as Named Entity Recognition (NER). Existing work mainly focuses on learning deep NER models with weak supervision alone, i.e., without any human annotation, and shows that merely using weakly labeled data can achieve good performance, though it still underperforms fully supervised NER trained on manually/strongly labeled data. In this paper, we consider a more practical scenario, where we have both a small amount of strongly labeled data and a large amount of weakly labeled data. Unfortunately, we observe that the weakly labeled data does not necessarily improve, and can even deteriorate, model performance (due to the extensive noise in the weak labels) when deep NER models are trained over a simple or weighted combination of the strongly labeled and weakly labeled data. To address this issue, we propose a new multi-stage computational framework -- NEEDLE -- with three essential ingredients: (1) weak label completion, (2) a noise-aware loss function, and (3) final fine-tuning over the strongly labeled data. Through experiments on E-commerce query NER and Biomedical NER, we demonstrate that NEEDLE effectively suppresses the noise of the weak labels and outperforms existing methods. In particular, we achieve new SOTA F1-scores on three Biomedical NER datasets: BC5CDR-chem 93.74, BC5CDR-disease 90.69, and NCBI-disease 92.28.
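To make the "noise-aware loss function" ingredient concrete, here is a minimal illustrative sketch (not the paper's actual formulation): a per-token cross-entropy that is down-weighted by an estimated confidence in each weak label, so that tokens whose weak labels are likely wrong contribute less to training. The `confidence` weights are an assumption of this sketch; in practice they would come from a model-based estimate of weak-label reliability.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def noise_aware_loss(logits_batch, labels, confidence):
    """Confidence-weighted cross-entropy over a batch of tokens.

    logits_batch : list of per-token logit lists (one entry per label class)
    labels       : weak (possibly noisy) label index for each token
    confidence   : estimated reliability of each weak label in [0, 1];
                   1.0 = trust fully, 0.0 = ignore the token entirely
    """
    total = 0.0
    for logits, y, c in zip(logits_batch, labels, confidence):
        p = softmax(logits)[y]          # model probability of the weak label
        total += c * (-math.log(p))     # down-weight noisy tokens
    return total / len(labels)
```

With all confidences set to 1.0 this reduces to plain cross-entropy on the weakly labeled data; with confidence 0.0 a token is effectively dropped, which is one simple way to suppress weak-label noise before the final fine-tuning stage on strongly labeled data.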