It is widely accepted that so-called facts can be checked by searching for information on the Internet. This process requires a fact-checker to formulate a search query based on the fact and to present it to a search engine. Relevant and believable passages must then be identified in the search results before a decision is made. This process is carried out daily by sub-editors at many news and media organisations. Here, we ask whether it is possible to automate the first step, that of query generation: can we automatically formulate search queries from factual statements that are similar to those formulated by human experts? We consider similarity both in terms of textual similarity and in terms of the relevant documents returned by a search engine. First, we introduce a moderate-sized evidence-collection dataset comprising 390 factual statements together with associated human-generated search queries and search results. We then investigate generating queries using a number of rule-based methods and automatic text-generation methods based on pre-trained large language models (LLMs). We show that these methods have different merits and propose a hybrid approach with superior performance in practice.
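To make the two notions of query similarity concrete, the following is a minimal illustrative sketch, not the paper's actual metrics: token-level F1 as a stand-in for textual similarity between a generated query and a human-written one, and Jaccard overlap between the URL sets each query retrieves as a stand-in for retrieval-based similarity. The metric choices and example data are assumptions.

```python
# Illustrative sketch of the two similarity notions mentioned above.
# Assumption: token F1 approximates textual similarity; Jaccard over
# returned URLs approximates agreement in retrieved documents.

def token_f1(generated: str, reference: str) -> float:
    """Token-level F1 between two queries (a simple textual-similarity proxy)."""
    gen, ref = generated.lower().split(), reference.lower().split()
    common = sum(min(gen.count(t), ref.count(t)) for t in set(gen))
    if common == 0:
        return 0.0
    precision = common / len(gen)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def result_jaccard(urls_a: set[str], urls_b: set[str]) -> float:
    """Jaccard overlap of the URL sets returned for two queries."""
    if not urls_a and not urls_b:
        return 1.0
    return len(urls_a & urls_b) / len(urls_a | urls_b)

# Hypothetical usage with made-up queries and result URLs:
print(token_f1("who won the 2020 US election", "2020 US election winner"))
print(result_jaccard({"example.com/a", "example.com/b"}, {"example.com/b"}))
```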