Argument mining tasks require handling a range of linguistic phenomena, from low to high complexity, as well as commonsense knowledge. Previous work has shown that pre-trained language models, built on different pre-training objectives and applied with transfer learning techniques, are highly effective at encoding syntactic and semantic linguistic phenomena. It remains unclear, however, to what extent existing pre-trained language models capture the complexity of argument mining tasks. We conduct experiments to shed light on how language models drawn from different lexical semantic families affect performance on the task of identifying argumentative discourse units. Experimental results show that transfer learning techniques are beneficial to the task, and that current methods may be insufficient to leverage commonsense knowledge from different lexical semantic families.