In this paper, we examine several methods of acquiring Czech data for automated fact-checking, a task commonly modeled as classification of textual claim veracity w.r.t. a corpus of trusted ground truths. We attempt to collect sets of data in the form of a factual claim, evidence within the ground-truth corpus, and its veracity label (supported, refuted, or not enough info). As a first attempt, we generate a Czech version of the large-scale FEVER dataset built on top of the Wikipedia corpus. We take a hybrid approach of machine translation and document alignment; the approach and the tools we provide can be easily applied to other languages. We discuss its weaknesses and inaccuracies, propose a future approach for their cleaning, and publish the 127k resulting translations, as well as a version of the dataset reliably applicable to the Natural Language Inference task: CsFEVER-NLI. Furthermore, we collect a novel dataset of 3,097 claims, annotated using a corpus of 2.2M Czech News Agency articles. We present its extended annotation methodology based on the FEVER approach, and, as the underlying corpus is kept a trade secret, we also publish a standalone version of the dataset for the Natural Language Inference task, which we call CTKFactsNLI. We analyze both acquired datasets for spurious cues, i.e., annotation patterns that lead to model overfitting. CTKFacts is further examined for inter-annotator agreement and thoroughly cleaned, and a typology of common annotator errors is extracted. Finally, we provide baseline models for all stages of the fact-checking pipeline and publish the NLI datasets, as well as our annotation platform and other experimental data.
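To make the NLI formulation concrete, the sketch below shows how a single (evidence, claim) pair can be scored against the three veracity labels with an off-the-shelf multilingual encoder. The checkpoint name, label order, and untrained classification head are illustrative assumptions only; they are not the paper's released baselines, and the model would need fine-tuning on CsFEVER-NLI or CTKFactsNLI before its predictions are meaningful.

```python
# Minimal sketch of the three-way NLI task: classify a claim against retrieved
# evidence as SUPPORTS, REFUTES, or NOT ENOUGH INFO.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # assumption: any multilingual encoder covering Czech
LABELS = ["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]  # assumed label order

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=3 adds a fresh (untrained) classification head; fine-tune it on the NLI data.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def classify(evidence: str, claim: str) -> str:
    """Predict the veracity label of a claim given an evidence passage."""
    inputs = tokenizer(evidence, claim, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Example (Czech): evidence states Prague is the capital of the Czech Republic,
# the claim asserts Prague lies in Germany.
print(classify("Praha je hlavní město České republiky.", "Praha leží v Německu."))
```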