The opaqueness of multi-hop fact verification models imposes an imperative requirement for explainability. One feasible approach is to extract rationales, i.e., subsets of the input whose removal causes prediction performance to drop dramatically. Though explainable, most rationale extraction methods for multi-hop fact verification explore the semantic information within each piece of evidence individually, while ignoring the topological information interaction among different pieces of evidence. Intuitively, a faithful rationale bears complementary information that enables the extraction of other rationales through the multi-hop reasoning process. To tackle these disadvantages, we cast explainable multi-hop fact verification as subgraph extraction, which can be solved with a graph convolutional network (GCN) with salience-aware graph learning. Specifically, the GCN is utilized to incorporate the topological interaction information among multiple pieces of evidence when learning evidence representations. Meanwhile, to alleviate the influence of noisy evidence, salience-aware graph perturbation is induced into the message passing of the GCN. Moreover, a multi-task model with three diagnostic properties of rationales is elaborately designed to improve the quality of explanations without any explicit rationale annotations. Experimental results on the FEVEROUS benchmark show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
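To make the salience-aware message passing concrete, below is a minimal PyTorch sketch, not the authors' implementation: each evidence node receives a learned salience score that reweights its incoming messages, so that noisy evidence contributes weaker messages during GCN aggregation. The class and attribute names (`SalienceAwareGCNLayer`, `salience_mlp`) and the exact form of the perturbation are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SalienceAwareGCNLayer(nn.Module):
    """One GCN layer whose adjacency is perturbed by per-node salience scores.

    A hypothetical sketch: node features are evidence-sentence embeddings; a
    learned salience score per node scales its outgoing edges, down-weighting
    messages that originate from noisy evidence.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Hypothetical salience scorer: maps each evidence node to a scalar in (0, 1).
        self.salience_mlp = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) evidence representations
        # adj: (num_nodes, num_nodes) adjacency over pieces of evidence
        salience = self.salience_mlp(x)                  # (num_nodes, 1)
        # Perturb the graph: scale each column (message source) by its salience,
        # so low-salience (noisy) evidence sends weaker messages.
        weighted_adj = adj * salience.transpose(0, 1)    # (num_nodes, num_nodes)
        # Row-normalize by the (weighted) degree before aggregation.
        deg = weighted_adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        msg = (weighted_adj / deg) @ x                   # aggregate neighbor features
        return F.relu(self.linear(msg))


# Toy usage: 4 evidence nodes, fully connected with self-loops.
x = torch.randn(4, 128)
adj = torch.ones(4, 4)
layer = SalienceAwareGCNLayer(128, 128)
out = layer(x, adj)  # (4, 128) salience-reweighted evidence representations
```

Under this reading, the salience scores act as a soft edge perturbation learned end-to-end, which is one plausible way the abstract's "salience-aware graph perturbation" could be injected into message passing; the paper's actual formulation may differ.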