Unsupervised commonsense question answering requires mining effective commonsense knowledge without relying on labeled task data. Previous methods typically retrieved from traditional knowledge bases or used pre-trained language models (PrLMs) to generate fixed types of knowledge, which have poor generalization ability. In this paper, we aim to address the above limitation by leveraging the implicit knowledge stored in PrLMs and propose a two-stage prompt-based unsupervised commonsense question answering framework (TSGP). Specifically, we first use knowledge generation prompts to generate the knowledge required for questions, without restricting its type. We then use answer generation prompts to generate possible candidate answers independent of the specified choices. Experimental results and analysis on three different commonsense reasoning tasks, CommonsenseQA, OpenBookQA, and SocialIQA, demonstrate that TSGP significantly improves the reasoning ability of language models in unsupervised settings. Our code is available at: https://github.com/Yueqing-Sun/TSGP.