Question answering over knowledge bases (KBQA) has become a popular approach to help users extract information from knowledge bases. Although several systems exist, choosing one suitable for a particular application scenario is difficult. In this article, we provide a comparative study of six representative KBQA systems on eight benchmark datasets. In doing so, we study various question types, properties, languages, and domains to provide insights into where existing systems struggle. We further propose an advanced mapping algorithm that helps existing models achieve superior results. Moreover, we develop a multilingual corpus, COVID-KGQA, which encourages COVID-19 research and multilingualism for the diversity of future AI. Finally, we discuss the key findings and their implications, as well as performance guidelines and possible future improvements. Our source code is available at \url{https://github.com/tamlhp/kbqa}.