Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to answer a question correctly. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference alone provides limited information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs than DAGs. To integrate evidence over these graphs, we investigate two recent graph neural networks, namely the graph convolutional network (GCN) and the graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers, and our method outperforms all published results on these datasets.
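To make the idea of evidence integration over a graph concrete, the sketch below shows one GCN-style message-passing step over a small evidence graph. This is a minimal illustration, not the authors' implementation: the node states, adjacency matrix, dimensions, and number of layers are invented placeholders, and the specific normalization (row-normalized adjacency with self-loops) is one common GCN variant assumed here for simplicity.

```python
# Minimal GCN sketch (illustrative only): each node's state is updated by
# aggregating its neighbors' states through a shared linear transform,
# followed by a ReLU. Stacking layers lets information travel multiple hops.
import numpy as np

def gcn_layer(H, A, W, b):
    """One GCN layer: H' = ReLU(D^-1 (A + I) H W + b).

    H: (n, d) node states; A: (n, n) 0/1 adjacency over evidence nodes;
    W: (d, d) weight matrix; b: (d,) bias.
    """
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)   # row-normalize by degree
    return np.maximum(D_inv * (A_hat @ H) @ W + b, 0.0)

# Toy evidence graph: 4 mention nodes, with edges linking mentions that
# co-occur or corefer across passages (hand-crafted for illustration).
rng = np.random.default_rng(0)
n, d = 4, 8
H = rng.normal(size=(n, d))                          # initial node states (e.g. from an encoder)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
W, b = rng.normal(size=(d, d)), np.zeros(d)

# Three layers of message passing allow information to propagate three hops.
for _ in range(3):
    H = gcn_layer(H, A, W, b)
print(H.shape)  # (4, 8): updated node states after multi-hop propagation
```

A GRN-based integrator would follow the same pattern but replace the per-layer linear-plus-ReLU update with a gated recurrent cell that consumes the aggregated neighbor messages at each step.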