Recently, pretrained language models (e.g., BERT) have achieved great success on many downstream natural language understanding tasks and exhibit a certain level of commonsense reasoning ability. However, their performance on commonsense tasks is still far from that of humans. As a preliminary attempt, we propose a simple yet effective method to teach pretrained models commonsense reasoning by leveraging the structured knowledge in ConceptNet, the largest commonsense knowledge base (KB). Specifically, the structured knowledge in the KB allows us to construct various logical forms and then generate multiple-choice questions that require commonsense logical reasoning. Experimental results demonstrate that, when refined on these training examples, the pretrained models consistently improve their performance on tasks that require commonsense reasoning, especially in the few-shot learning setting. In addition, we analyze which logical relations are most relevant to commonsense reasoning.
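To make the question-generation step concrete, below is a minimal, hypothetical sketch of how multiple-choice questions might be derived from ConceptNet-style triples. The triples, relation templates, and distractor-sampling strategy here are illustrative assumptions, not the authors' actual pipeline or logical forms.

```python
import random

# Toy ConceptNet-style triples: (head, relation, tail).
TRIPLES = [
    ("bird", "CapableOf", "fly"),
    ("fish", "CapableOf", "swim"),
    ("knife", "UsedFor", "cutting"),
    ("bed", "UsedFor", "sleeping"),
    ("ice", "HasProperty", "cold"),
    ("fire", "HasProperty", "hot"),
]

# Simple natural-language templates per relation (illustrative only).
TEMPLATES = {
    "CapableOf": "What is a {head} capable of?",
    "UsedFor": "What is a {head} used for?",
    "HasProperty": "What property does {head} have?",
}


def make_question(triple, all_triples, num_choices=4, rng=random):
    """Build one multiple-choice question from a triple, using tails of
    other triples as distractors."""
    head, relation, tail = triple
    # Candidate distractors: any tail that is not the correct answer.
    distractor_pool = [t for (_, _, t) in all_triples if t != tail]
    distractors = rng.sample(distractor_pool,
                             min(num_choices - 1, len(distractor_pool)))
    choices = distractors + [tail]
    rng.shuffle(choices)
    return {
        "question": TEMPLATES[relation].format(head=head),
        "choices": choices,
        "answer": tail,
    }


if __name__ == "__main__":
    rng = random.Random(0)
    q = make_question(TRIPLES[0], TRIPLES, rng=rng)
    print(q["question"])
    for i, choice in enumerate(q["choices"]):
        print(f"  ({chr(97 + i)}) {choice}")
    print("answer:", q["answer"])
```

Questions produced this way can then be used as additional training examples for refining a pretrained model before evaluating it on downstream commonsense benchmarks.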