Table-based reasoning has shown remarkable progress in combining deep models with discrete reasoning, which requires reasoning over both free-form natural language (NL) questions and structured tabular data. However, previous table-based reasoning solutions usually suffer from significant performance degradation on huge evidence (tables). In addition, most existing methods struggle to reason over complex questions since the required information is scattered in different places. To alleviate the above challenges, we exploit large language models (LLMs) as decomposers for effective table-based reasoning, which (i) decompose huge evidence (a huge table) into sub-evidence (a small table) to mitigate the interference of useless information for table reasoning; and (ii) decompose complex questions into simpler sub-questions for text reasoning. Specifically, we first use the LLMs to break down the evidence (tables) involved in the current question, retaining the relevant evidence and excluding the remaining irrelevant evidence from the huge table. In addition, we propose a "parsing-execution-filling" strategy to alleviate the hallucination dilemma of chain-of-thought reasoning by decoupling logic from numerical computation in each step. Extensive experiments show that our method can effectively leverage decomposed evidence and questions, and it outperforms strong baselines on the TabFact, WikiTableQuestions, and FetaQA datasets. Notably, our model surpasses human performance for the first time on the TabFact dataset.
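The following is a minimal sketch of the decompose-then-reason pipeline described above, under stated assumptions: the prompts, helper names, and the `call_llm` / `execute_sql` stubs are illustrative placeholders, not the authors' released implementation.

```python
# Illustrative sketch of evidence/question decomposition with a
# "parsing-execution-filling" step; all prompts and stubs are assumptions.
from typing import Dict, List


def call_llm(prompt: str) -> str:
    """Stub for any LLM completion API (hypothetical)."""
    raise NotImplementedError


def execute_sql(query: str, table: List[Dict[str, str]]) -> str:
    """Stub for a deterministic SQL-style executor over the (sub-)table (hypothetical)."""
    raise NotImplementedError


def decompose_evidence(table: List[Dict[str, str]], question: str) -> List[Dict[str, str]]:
    """Keep only the columns the LLM judges relevant to the question."""
    columns = ", ".join(table[0].keys())
    reply = call_llm(
        f"Table columns: {columns}\nQuestion: {question}\n"
        "List the relevant columns, comma-separated:"
    )
    keep = {c.strip() for c in reply.split(",")}
    return [{k: v for k, v in row.items() if k in keep} for row in table]


def decompose_question(question: str) -> List[str]:
    """Split a complex question into simpler sub-questions."""
    reply = call_llm(f"Decompose into simpler sub-questions, one per line:\n{question}")
    return [q.strip() for q in reply.splitlines() if q.strip()]


def parse_execute_fill(sub_question: str, sub_table: List[Dict[str, str]]) -> str:
    """Parsing-execution-filling: the LLM writes the query (logic only), an
    external executor computes the value, and the value is filled back in,
    so numerical results are not produced by the model itself."""
    query = call_llm(f"Write a SQL query that answers: {sub_question}")
    value = execute_sql(query, sub_table)
    return f"{sub_question} -> {value}"


def answer(question: str, table: List[Dict[str, str]]) -> str:
    """End-to-end: shrink the table, decompose the question, then reason."""
    sub_table = decompose_evidence(table, question)
    facts = [parse_execute_fill(q, sub_table) for q in decompose_question(question)]
    return call_llm(
        "Answer the question using these facts:\n" + "\n".join(facts)
        + f"\nQuestion: {question}"
    )
```

The design point is that each LLM call handles only logic (selecting columns, writing queries, composing the final answer), while arithmetic is delegated to an external executor, which is what mitigates chain-of-thought hallucination on numbers.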