Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge. We believe the main reason is the lack of commonsense \mbox{connections} between concepts. To remedy this, we provide a simple and effective method that leverages an external commonsense knowledge base such as ConceptNet. We pre-train direct and indirect relational functions between concepts, and show that these pre-trained functions can be easily added to existing neural network models. Results show that incorporating the commonsense-based functions improves the baseline on three question answering tasks that require commonsense reasoning. Further analysis shows that our system \mbox{discovers} and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models and helps derive the correct answer.