Time is an important dimension in our physical world. Many facts evolve over time; for example, the U.S. President may change every four years. It is therefore important to consider the time dimension and empower existing QA models to reason over time. However, existing QA datasets contain rather few time-sensitive questions, making them unsuitable for diagnosing or benchmarking a model's temporal reasoning capability. To promote research in this direction, we propose to construct a time-sensitive QA dataset. The dataset is constructed by 1) mining time-evolving facts from WikiData and aligning them with their corresponding Wikipedia pages, 2) employing crowd workers to verify and calibrate these noisy facts, and 3) generating question-answer pairs based on the annotated time-sensitive facts. Our dataset poses two novel challenges: 1) the model needs to understand both explicit and implicit mentions of time information in a long document, and 2) the model needs to perform temporal reasoning such as comparison, addition, and subtraction. We evaluate different SoTA long-document QA systems, such as BigBird and FiD, on our dataset. The best-performing model, FiD, achieves only 46\% accuracy, still far behind the human performance of 87\%. We demonstrate that these models still lack the ability to perform robust temporal understanding and reasoning. We therefore believe that our dataset can serve as a benchmark to empower future studies in temporal reasoning. The dataset and code are released at~\url{https://github.com/wenhuchen/Time-Sensitive-QA}.
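To make the temporal-comparison challenge concrete, the following minimal sketch resolves a time-sensitive question by checking which fact's validity interval contains the query timestamp. The fact tuples and the `answer_at` helper are purely illustrative assumptions, not part of the released dataset or code; they only mirror the spirit of the (subject, relation, object, start, end) facts mined from WikiData.

```python
from datetime import date

# Hypothetical mini knowledge base of time-scoped facts
# in the form (subject, relation, object, start, end).
FACTS = [
    ("United States", "head of state", "Barack Obama",
     date(2009, 1, 20), date(2017, 1, 20)),
    ("United States", "head of state", "Donald Trump",
     date(2017, 1, 20), date(2021, 1, 20)),
]

def answer_at(subject, relation, when):
    """Return the object whose validity interval contains `when`.

    This is the temporal-comparison step: the query timestamp is
    compared against each fact's [start, end) interval.
    """
    for subj, rel, obj, start, end in FACTS:
        if subj == subject and rel == relation and start <= when < end:
            return obj
    return None

print(answer_at("United States", "head of state", date(2015, 6, 1)))
```

A full system must additionally extract such intervals from implicit time mentions in long documents, which is where current models fall short.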