Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. The desired subgraph is crucial: a small one may exclude the answer, while a large one may introduce more noise. However, existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning over partial subgraphs, which increases the reasoning bias when intermediate supervision is missing. This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model. Extensive experiments demonstrate that SR achieves significantly better retrieval and QA performance than existing retrieval methods. Via weakly supervised pre-training as well as end-to-end fine-tuning, SR achieves new state-of-the-art performance among embedding-based KBQA methods when combined with NSM, a subgraph-oriented reasoner.