This paper summarizes our participation in the SMART Task of the ISWC 2020 Challenge. A particular question we are interested in answering is how well neural methods, and specifically transformer models such as BERT, perform on the answer type prediction task compared to traditional approaches. Our main finding is that coarse-grained answer types can be identified effectively with standard text classification methods, with over 95% accuracy, and BERT brings only marginal improvements. For fine-grained type detection, on the other hand, BERT clearly outperforms previous retrieval-based approaches.