We propose an ensemble of three pre-trained language models (XLM-R, BART, and DeBERTa-V3) to produce enriched contextualized embeddings for named entity recognition. Our model achieves a 92.9% F1 score on the test set and ranks 5th on the leaderboard of subtask 1 of the NL4Opt competition.
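The abstract does not specify how the three encoders' outputs are combined; one common approach is to concatenate the per-token embeddings along the feature dimension before the NER classification head. A minimal sketch of that idea is below — the arrays stand in for encoder outputs, the hidden sizes are illustrative, and the nontrivial step of aligning tokens across the three models' different tokenizers is omitted:

```python
import numpy as np

# Hypothetical per-token embeddings from the three encoders for a
# 6-token sentence. Hidden sizes here are illustrative placeholders.
xlmr_emb = np.random.rand(6, 768)     # stand-in for XLM-R output
bart_emb = np.random.rand(6, 768)     # stand-in for BART encoder output
deberta_emb = np.random.rand(6, 768)  # stand-in for DeBERTa-V3 output

# Feature-wise concatenation yields one richer contextualized
# representation per token, which a tagging head can then consume.
combined = np.concatenate([xlmr_emb, bart_emb, deberta_emb], axis=-1)
print(combined.shape)  # (6, 2304)
```

In practice each encoder would be run through the Hugging Face `transformers` library and the concatenated vectors fed to a token-classification layer; the ensemble details here are an assumption, not the authors' confirmed architecture.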