Entailment trees have been proposed to simulate the human reasoning process of explanation generation in the context of open-domain textual question answering. In practice, however, manually constructing these explanation trees is a laborious process that requires active human involvement. Given the complexity of capturing the line of reasoning from a question to its answer, or from a claim to its premises, the question arises of how to assist the user in efficiently constructing multi-level entailment trees given a large set of available facts. In this paper, we frame the construction of entailment trees as a sequence of active premise selection steps: for each intermediate node in an explanation tree, the expert annotates positive and negative examples of premise facts drawn from a large candidate list. We then iteratively fine-tune pre-trained Transformer models on the resulting positive and tightly controlled negative samples, aiming to balance the encoding of semantic relationships and explanatory entailment relationships. Experimental evaluation confirms the measurable efficiency gains of the proposed active fine-tuning method in facilitating entailment tree construction: up to 20\% improvement in explanatory premise selection compared with several alternatives.
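The active premise selection loop described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: it replaces the fine-tuned Transformer with a toy lexical scorer, and the expert annotator is simulated with a keyword rule. All names (`PremiseScorer`, `select_premises`) and the update rule are illustrative assumptions; only the overall loop structure (rank candidates, collect positive/negative labels for a node, update the model) follows the abstract.

```python
# Toy sketch of one active premise selection round (NOT the paper's model):
# a lexical scorer ranks candidate facts for a tree node, an "expert" labels
# positives/negatives among the top candidates, and the scorer is updated.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector over lowercase whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class PremiseScorer:
    """Cosine similarity plus per-token bonuses learned from labeled pairs."""
    def __init__(self):
        self.weights = Counter()  # token -> learned bonus

    def score(self, node, premise):
        return cosine(bow(node), bow(premise)) + sum(
            self.weights[t] for t in bow(premise))

    def update(self, positives, negatives, lr=0.1):
        # Push tokens from expert-labeled positives up, negatives down
        # (a crude stand-in for fine-tuning on contrastive samples).
        for p in positives:
            for t in bow(p):
                self.weights[t] += lr
        for n in negatives:
            for t in bow(n):
                self.weights[t] -= lr

def select_premises(scorer, node, candidates, k=3):
    """Return the k highest-scoring candidate premises for a tree node."""
    return sorted(candidates, key=lambda c: scorer.score(node, c),
                  reverse=True)[:k]

facts = [
    "plants use sunlight to make food",
    "the sun is a source of light",
    "rocks are made of minerals",
    "food gives organisms energy",
]
node = "plants get energy from sunlight"
scorer = PremiseScorer()
ranked = select_premises(scorer, node, facts)
# Simulated expert annotation of the model's top candidates.
positives = [f for f in ranked if "sunlight" in f or "energy" in f]
negatives = [f for f in ranked if f not in positives]
scorer.update(positives, negatives)
```

In the actual setting, `PremiseScorer` would be a pre-trained Transformer encoder, the expert labels would come from the human annotator, and the update step would be a fine-tuning pass on the accumulated positive and controlled negative samples; the loop then repeats for the next intermediate node.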