Wikipedia is the largest online encyclopedia, used by algorithms and web users as a central hub of reliable information on the web. The quality and reliability of Wikipedia content are maintained by a community of volunteer editors. Machine learning and information retrieval algorithms could help scale up editors' manual efforts to maintain Wikipedia content reliability. However, there is a lack of large-scale data to support the development of such research. To fill this gap, in this paper, we propose Wiki-Reliability, the first dataset of English Wikipedia articles annotated with a wide set of content reliability issues. To build this dataset, we rely on Wikipedia "templates". Templates are tags used by expert Wikipedia editors to indicate content issues, such as the presence of a "non-neutral point of view" or "contradictory articles", and serve as a strong signal for detecting reliability issues in a revision. We select the 10 most popular reliability-related templates on Wikipedia and propose an effective method to label almost 1M samples of Wikipedia article revisions as positive or negative with respect to each template. Each positive/negative example in the dataset comes with the full article text and 20 features from the revision's metadata. We provide an overview of the possible downstream tasks enabled by such data, and show that Wiki-Reliability can be used to train large-scale models for content reliability prediction. We release all data and code for public use.
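As an illustration of how templates can serve as labeling signals, the sketch below shows one possible way to detect a reliability-related template in a revision's wikitext and to pair a revision that carries the template (positive example) with a later revision where the template has been removed (negative example). This is a minimal sketch: the template names, function names, and pairing heuristic are illustrative assumptions, not the exact labeling procedure used to build the dataset.

```python
import re

# Hypothetical template names; the dataset uses the 10 most popular
# reliability-related templates on English Wikipedia (e.g. "POV").
RELIABILITY_TEMPLATES = ["POV", "Contradicts other"]

def has_template(wikitext: str, template: str) -> bool:
    """Return True if the revision's wikitext contains the given maintenance template."""
    # Matches e.g. "{{POV}}" or "{{POV|date=May 2020}}", case-insensitively.
    pattern = r"\{\{\s*" + re.escape(template) + r"\s*[|}]"
    return re.search(pattern, wikitext, flags=re.IGNORECASE) is not None

def label_revision_pair(prev_wikitext: str, curr_wikitext: str, template: str):
    """
    Pair two consecutive revisions of an article for one template:
    the revision carrying the template is treated as a positive example,
    and the later revision where editors removed it (issue resolved)
    as the corresponding negative example. Returns None otherwise.
    """
    if has_template(prev_wikitext, template) and not has_template(curr_wikitext, template):
        return {"template": template, "positive": prev_wikitext, "negative": curr_wikitext}
    return None
```

Keying positive/negative pairs to the addition and removal of the same template in an article's history keeps the two examples topically comparable, so a downstream model can focus on the reliability issue rather than on differences between unrelated articles.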