Complex knowledge base question answering can be achieved by converting questions into sequences of predefined actions. However, there is a significant semantic and structural gap between natural language and action sequences, which makes this conversion difficult. In this paper, we introduce an alignment-enhanced complex question answering framework, called ALCQA, which mitigates this gap through question-to-action alignment and question-to-question alignment. We train a question rewriting model to align the question with each action, and utilize a pretrained language model to implicitly align the question with KG artifacts. Moreover, since similar questions correspond to similar action sequences, we retrieve the top-k most similar question-answer pairs at the inference stage through question-to-question alignment and propose a novel reward-guided strategy to select among the candidate action sequences. We conduct experiments on the CQA and WQSP datasets, and the results show that our approach outperforms state-of-the-art methods, achieving a 9.88% improvement in F1 on the CQA dataset. Our source code is available at https://github.com/TTTTTTTTy/ALCQA.
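To make the inference-stage idea concrete, below is a minimal sketch (not the authors' implementation) of retrieving top-k similar question-answer pairs and then choosing among candidate action sequences with a reward. The similarity measure (TF-IDF cosine), the reward definition (similarity-weighted F1 over the retrieved pairs), and the `execute` and `f1` callables are all illustrative assumptions standing in for the paper's learned components.

```python
# Hypothetical sketch of question-to-question retrieval plus
# reward-guided action sequence selection; names and the reward
# definition are placeholders, not the ALCQA implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_top_k(question, qa_pairs, k=5):
    """Question-to-question alignment: rank stored (question, answer)
    pairs by TF-IDF cosine similarity to the input question."""
    questions = [q for q, _ in qa_pairs]
    vec = TfidfVectorizer().fit(questions + [question])
    sims = cosine_similarity(vec.transform([question]),
                             vec.transform(questions))[0]
    ranked = sorted(zip(sims, qa_pairs), key=lambda x: x[0], reverse=True)
    return ranked[:k]  # list of (similarity, (question, answer))

def select_action_sequence(candidates, retrieved, execute, f1):
    """Reward-guided selection: score each candidate action sequence
    by the similarity-weighted F1 it achieves on the retrieved pairs.
    `execute(seq, q)` and `f1(pred, gold)` are assumed callables."""
    def reward(seq):
        return sum(sim * f1(execute(seq, q), gold)
                   for sim, (q, gold) in retrieved)
    return max(candidates, key=reward)
```

In this sketch, retrieval and selection are decoupled: any stronger retriever (e.g., dense embeddings) or any learned reward model could be swapped in without changing the selection loop.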