The task of answering a question given a text passage has seen great improvements in model performance, thanks to community efforts in building useful datasets. Recently, doubts have been raised about whether such rapid progress rests on a true understanding of language. The same question has not been asked for the table question answering (TableQA) task, where the goal is to answer a query given a table. We show that existing approaches, which use "answers" for both evaluation and supervision in TableQA, suffer deteriorating performance under adversarial perturbations that do not affect the answer. This insight naturally motivates the development of new models that understand the question and the table more precisely. To this end, we propose Neural Operator (NeOp), a multi-layer sequential network with attention supervision that answers a query given a table. NeOp uses multiple Selective Recurrent Units (SelRUs) to further improve the interpretability of the model's answers. Experiments show that using operand information to train the model significantly improves the performance and interpretability of TableQA models. NeOp outperforms all previous models by a large margin.