Accurate parsing of citation reference strings is crucial for automatically constructing scholarly databases such as Google Scholar or Semantic Scholar. Citation field extraction (CFE) is precisely this task: given a reference string, label which tokens refer to the authors, venue, title, editor, journal, pages, etc. Most methods for CFE are supervised and rely on training on labeled datasets that are quite small compared to the great variety of reference formats. BibTeX, the widely used reference management tool, provides a natural way to automatically generate labeled training data for CFE. In this paper, we describe a technique for using BibTeX to automatically generate a large-scale labeled dataset (41M labeled strings) that is four orders of magnitude larger than the current largest CFE dataset, namely the UMass Citation Field Extraction dataset [Anzaroot and McCallum, 2013]. We experimentally demonstrate how our dataset can be used to improve performance on the UMass CFE dataset using a RoBERTa-based [Liu et al., 2019] model. In comparison to the previous SoTA, we achieve a 24.48% relative error reduction, reaching a span-level F1 score of 96.3%.
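To make the data-generation idea concrete, here is a minimal illustrative sketch (not the paper's actual pipeline) of how a parsed BibTeX entry could be rendered into a citation string while recording token-level field labels. The entry dictionary, the single citation template, and the BIO labeling scheme are assumptions introduced for this example; a real generator would render entries under many citation styles to cover the variety of reference formats.

```python
from typing import Dict, List, Tuple

def render_with_labels(entry: Dict[str, str]) -> Tuple[str, List[Tuple[str, str]]]:
    """Render a BibTeX-style entry into a reference string and return
    (citation_string, [(token, label), ...]) with BIO-style field labels."""
    # One hypothetical citation format: author. title. journal. year. pages.
    field_order = ["author", "title", "journal", "year", "pages"]
    tokens: List[str] = []
    labels: List[str] = []
    for field in field_order:
        value = entry.get(field)
        if not value:
            continue
        field_tokens = value.split()
        # Label the field tokens, then add an unlabeled separator token.
        tokens.extend(field_tokens + ["."])
        labels.extend(
            [f"B-{field}"] + [f"I-{field}"] * (len(field_tokens) - 1) + ["O"]
        )
    return " ".join(tokens), list(zip(tokens, labels))

if __name__ == "__main__":
    # Hypothetical entry for illustration only.
    example = {
        "author": "A. Anzaroot and A. McCallum",
        "title": "A New Dataset for Fine-Grained Citation Field Extraction",
        "journal": "ICML Workshop",
        "year": "2013",
        "pages": "1--5",
    }
    citation, labeled_tokens = render_with_labels(example)
    print(citation)
    for token, label in labeled_tokens:
        print(f"{token}\t{label}")
```

The resulting (string, label sequence) pairs are exactly the kind of supervision a token-tagging model such as a RoBERTa-based tagger consumes.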