Table-based fact verification has recently gained widespread attention, yet it remains a very challenging problem. The task inherently requires informative reasoning over natural language together with diverse numerical and logical reasoning on tables (e.g., count, superlative, comparative). Motivated by this, we exploit mixture-of-experts and present a new method in this paper: the Self-adaptive Mixture-of-Experts Network (SaMoE). Specifically, we develop a mixture-of-experts neural network to recognize and execute different types of reasoning: the network is composed of multiple experts, each handling a specific part of the semantics for reasoning, while a management module decides the contribution of each expert network to the verification result. A self-adaptive method is developed to teach the management module to combine the results of different experts more effectively without external knowledge. Experimental results show that our framework achieves 85.1% accuracy on the benchmark dataset TabFact, comparable to the previous state-of-the-art models. We hope our framework can serve as a new baseline for table-based verification. Our code is available at https://github.com/THUMLP/SaMoE.
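To illustrate the core combination step described above, the following is a minimal sketch of how a management (gating) module can weight multiple experts' outputs. This is a hypothetical illustration, not the authors' actual SaMoE implementation: the function names (`softmax`, `mixture_of_experts`) and the two-class logit layout are assumptions for the example.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of raw scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_of_experts(expert_logits, gate_scores):
    """Combine per-expert logits using gating weights.

    expert_logits: list of [logit_entailed, logit_refuted] per expert
    gate_scores:   raw management-module scores, one per expert
    """
    weights = softmax(gate_scores)
    num_classes = len(expert_logits[0])
    combined = [0.0] * num_classes
    # Weighted sum of each expert's output, weighted by its gate score.
    for w, logits in zip(weights, expert_logits):
        for i, logit in enumerate(logits):
            combined[i] += w * logit
    return combined

# Example: three experts (e.g., handling count, superlative, and
# comparative reasoning); the gate strongly favors the first expert.
logits = [[2.0, -1.0], [0.5, 0.5], [-1.0, 2.0]]
scores = [3.0, 0.0, -3.0]
print(mixture_of_experts(logits, scores))
```

Because the gate assigns most of its weight to the first expert, the combined logits are dominated by that expert's prediction; a self-adaptive training scheme, as described in the abstract, would learn such gate scores from the verification signal itself.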