In this paper, we investigate the problem of reasoning over natural language statements. Prior neural-based approaches do not explicitly model the interdependency between answers and their proofs. We propose PRobr, a novel approach for joint answer prediction and proof generation. PRobr defines a joint probability distribution over all possible proof graphs and answers via an induced graphical model, which we optimize using variational approximation on top of neural textual representations. Experiments on multiple datasets under diverse settings (fully supervised, few-shot, and zero-shot evaluation) verify the effectiveness of PRobr, e.g., achieving 10%-30% improvements in QA accuracy under few/zero-shot evaluation. Our code and models are available at https://github.com/changzhisun/PRobr/.
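For concreteness, the joint formulation described above can be sketched as follows; the notation (input $x$, answer $y$, proof graph $G$, variational distribution $q_\psi$) is illustrative and does not reproduce the paper's exact parameterization:

\begin{align}
p_\theta(y, G \mid x) &\propto \exp\big(\phi_\theta(y, G, x)\big), \\
\log p_\theta(y \mid x) &\ge \mathbb{E}_{q_\psi(G \mid x, y)}\big[\log p_\theta(y, G \mid x) - \log q_\psi(G \mid x, y)\big],
\end{align}

where $\phi_\theta$ is a score over answer-proof pairs computed from neural textual representations, and the right-hand side is the standard evidence lower bound (ELBO) that variational training maximizes in place of the intractable marginal over proof graphs.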