Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge. We believe the main reason is the lack of commonsense connections between concepts. To remedy this, we provide a simple and effective method that leverages an external commonsense knowledge base such as ConceptNet. We pre-train direct and indirect relational functions between concepts, and show that these pre-trained functions can be easily added to existing neural network models. Results show that incorporating the commonsense-based functions improves the state of the art on two question answering tasks that require commonsense reasoning. Further analysis shows that our system discovers and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models and helps derive the correct answer.
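To make the idea of a pre-trained relational function concrete, the sketch below shows one plausible form it could take: a bilinear scorer over concept embeddings, trained on (head, relation, tail) triples from ConceptNet. This is a minimal illustration, not the authors' implementation; the embedding dimension, the bilinear parameterization, and all names (concept_emb, relation_mat, direct_relation_score) are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed, not the paper's released code) of a pre-trained
# "direct relational function" between two concepts: a bilinear form
# h^T W_r t over concept embeddings, with one matrix W_r per ConceptNet
# relation. Higher scores indicate more plausible commonsense connections.
import numpy as np

rng = np.random.default_rng(0)
dim = 100  # assumed embedding size; random values stand in for pre-trained ones

# Hypothetical pre-trained parameters: concept embeddings and relation matrices
# (e.g. for ConceptNet relations such as "UsedFor", "IsA").
concept_emb = {
    "alarm clock": rng.normal(size=dim),
    "wake up": rng.normal(size=dim),
}
relation_mat = {"UsedFor": rng.normal(size=(dim, dim))}

def direct_relation_score(head: str, relation: str, tail: str) -> float:
    """Score how plausibly `head` connects to `tail` via `relation`."""
    h = concept_emb[head]
    t = concept_emb[tail]
    W = relation_mat[relation]
    return float(h @ W @ t)

# A downstream QA model could attach this score as an extra feature when
# ranking candidate answers, e.g.:
print(direct_relation_score("alarm clock", "UsedFor", "wake up"))
```

An indirect relational function could, under the same assumptions, compose such scores along multi-hop ConceptNet paths (head to intermediate concept to tail), which is one way to supply the missing commonsense connections the abstract describes.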