Weakly supervised named entity recognition methods train label models to aggregate the token annotations of multiple noisy labeling functions (LFs) without seeing any manually annotated labels. To work well, the label model needs to contextually identify and emphasize well-performing LFs while down-weighting the under-performers. However, evaluating the LFs is challenging due to the lack of ground truth. To address this issue, we propose the sparse conditional hidden Markov model (Sparse-CHMM). Instead of predicting the entire emission matrix as other HMM-based methods do, Sparse-CHMM focuses on estimating its diagonal elements, which are treated as the reliability scores of the LFs. These sparse scores are then expanded to the full-fledged emission matrix with pre-defined expansion functions. We also augment the emission with weighted XOR scores, which track the probabilities of an LF observing incorrect entities. Sparse-CHMM is optimized through unsupervised learning with a three-stage training pipeline that reduces the training difficulty and prevents the model from falling into local optima. Compared with the baselines in the Wrench benchmark, Sparse-CHMM achieves a 3.01 average F1 score improvement on five comprehensive datasets. Experiments show that each component of Sparse-CHMM is effective, and the estimated LF reliabilities strongly correlate with the true LF F1 scores.