Relation detection plays a crucial role in Knowledge Base Question Answering (KBQA) because of the high variance of relation expressions in questions. Traditional deep learning methods follow an encoding-comparing paradigm, in which the question and the candidate relation are represented as vectors whose semantic similarity is then compared. The max- or average-pooling operation, which compresses a sequence of words into a fixed-dimensional vector, becomes an information bottleneck. In this paper, we propose to learn attention-based word-level interactions between questions and relations to alleviate this bottleneck. As in traditional models, the question and the relation are first represented as sequences of vectors. Then, instead of merging each sequence into a single vector with a pooling operation, soft alignments between words from the question and the relation are learned. The aligned words are subsequently compared with a convolutional neural network (CNN), and the comparison results are finally merged. By performing the comparison on low-level representations, the attention-based word-level interaction model (ABWIM) alleviates the information loss caused by compressing the sequence into a fixed-dimensional vector before comparison. Experimental results on relation detection over both the SimpleQuestions and WebQuestions datasets show that ABWIM achieves state-of-the-art accuracy, demonstrating its effectiveness.
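To make the described pipeline concrete, the following is a minimal PyTorch sketch of the core idea: soft-align each relation word to a question context via attention, compare the aligned pairs with a CNN over low-level word vectors, and merge the comparison features into a similarity score. All class, parameter, and dimension names here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ABWIMSketch(nn.Module):
    """Hypothetical sketch of attention-based word-level interaction.

    Dimensions and layer choices are assumptions for illustration only.
    """
    def __init__(self, embed_dim=300, num_filters=100, kernel_size=3):
        super().__init__()
        # CNN that compares each relation word with its aligned question context
        self.compare_cnn = nn.Conv1d(2 * embed_dim, num_filters,
                                     kernel_size, padding=kernel_size // 2)
        self.score = nn.Linear(num_filters, 1)

    def forward(self, q, r):
        # q: (batch, q_len, d) question word vectors
        # r: (batch, r_len, d) relation word vectors
        # Soft alignment: attention weights between every relation/question word pair
        attn = torch.softmax(torch.bmm(r, q.transpose(1, 2)), dim=-1)  # (batch, r_len, q_len)
        aligned_q = torch.bmm(attn, q)  # aligned question context per relation word
        # Word-level comparison on low-level representations,
        # then CNN + max-pooling to merge the comparison results
        pairs = torch.cat([r, aligned_q], dim=-1).transpose(1, 2)      # (batch, 2d, r_len)
        features = F.relu(self.compare_cnn(pairs)).max(dim=-1).values  # (batch, num_filters)
        return self.score(features).squeeze(-1)  # similarity score per (question, relation)
```

Note the contrast with the encoding-comparing paradigm: pooling here happens only after the word-level comparison, so the fixed-dimensional compression no longer discards alignment information before similarity is computed.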