We describe a new dataset of software mentions in biomedical papers. Plain-text software mentions are extracted with a trained SciBERT model from several sources: the NIH PubMed Central collection and papers provided by various publishers to the Chan Zuckerberg Initiative. The dataset provides sources, context, and metadata, and, for a subset of mentions, the disambiguated software entities and links. We extract 1.12 million unique software-mention strings from 2.4 million papers in the NIH PMC-OA Commercial subset, 481k unique mentions from the NIH PMC-OA Non-Commercial subset (both gathered in October 2021), and 934k unique mentions from 3 million papers in the publishers' collection. There is considerable variation in how software is mentioned in papers and extracted by the NER algorithm. We propose a clustering-based disambiguation algorithm to map plain-text software mentions onto distinct software entities and apply it to the NIH PubMed Central Commercial collection. Through this methodology, we disambiguate the 1.12 million unique strings extracted by the NER model into 97,600 unique software entities, covering 78% of all software-paper links. We link 185,000 of the mentions to a repository, covering about 55% of all software-paper links. We describe in detail the process of building the datasets, disambiguating and linking the software mentions, as well as the opportunities and challenges that come with a dataset of this size. We make all data and code publicly available as a new resource to help assess the impact of software (in particular scientific open-source projects) on science.
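The abstract does not specify the internals of the clustering-based disambiguation, but the core idea of collapsing mention-string variants (casing, punctuation, version suffixes) into one entity can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm; the normalization rules and the similarity threshold are assumptions chosen for the example.

```python
import re
from difflib import SequenceMatcher

def normalize(mention):
    """Lowercase, drop trailing version numbers, and strip punctuation
    so surface variants of the same software collapse together."""
    m = mention.lower()
    m = re.sub(r"[\s\-_.]*v?\d+(\.\d+)*$", "", m)   # drop trailing versions, e.g. "v7.0"
    m = re.sub(r"[^a-z0-9 ]", "", m).strip()        # drop remaining punctuation
    return m

def cluster_mentions(mentions, threshold=0.9):
    """Greedy single-pass clustering: assign each normalized mention to the
    first cluster whose representative string is sufficiently similar."""
    clusters = []  # list of (representative, member mentions)
    for mention in mentions:
        key = normalize(mention)
        for rep, members in clusters:
            if SequenceMatcher(None, key, rep).ratio() >= threshold:
                members.append(mention)
                break
        else:
            clusters.append((key, [mention]))
    return clusters

mentions = ["GraphPad Prism", "graphpad prism v7.0", "GraphPad Prism 8",
            "ImageJ", "Image J"]
print(cluster_mentions(mentions))
# two clusters: one for GraphPad Prism variants, one for ImageJ variants
```

A production version would replace the greedy pass with a proper clustering method over pairwise string similarities, but the normalization-plus-similarity structure is the same.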