Without labeled question-answer pairs for training, unsupervised commonsense question answering (QA) is extremely challenging because it typically depends on an external commonsense source such as a knowledge base (KB), which is usually highly resource-consuming to construct. Recently, pre-trained language models (PrLMs) have shown effectiveness as an alternative source of commonsense clues when they play the role of knowledge generator. However, existing work either simply generates hundreds of pseudo-answers or performs knowledge generation from fixed templates in a single pass, which may introduce considerable noise and thus degrade the quality of the generated knowledge. Motivated by human thinking experience, we propose All-round Thinker (ArT), an approach that fully exploits association during knowledge generation. In detail, our model first focuses on the key parts of the given context and then generates highly related knowledge on this basis in an associative way, much as humans think. Besides, for causal reasoning, a reverse thinking mechanism is proposed to conduct bidirectional inference between cause and effect. ArT is totally unsupervised and KB-free. We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA, and SCT. Across PrLM backbones of all scales, ArT performs strongly and outperforms previous advanced unsupervised models.
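To make the described pipeline concrete, below is a minimal Python sketch, not the authors' implementation, of the generate-then-score idea: a PrLM first produces a commonsense clue from the context, then candidate answers are ranked by language-model likelihood, with a bidirectional (reverse-thinking) score for causal questions. The model choice (gpt2 via Hugging Face transformers), all prompt templates, and the helper names are illustrative assumptions.

# Minimal sketch of unsupervised generate-then-score QA with a PrLM.
# Assumptions: gpt2 as the backbone; all prompt templates are hypothetical.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def nll(text: str) -> float:
    """Average negative log-likelihood of `text` under the PrLM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def generate_clue(context: str, max_new_tokens: int = 20) -> str:
    """Associative knowledge generation: let the PrLM continue a
    prompt built from the context (hypothetical template)."""
    prompt = context + " This implies that"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=max_new_tokens,
                             do_sample=True, top_p=0.9,
                             pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

def bidirectional_score(cause: str, effect: str) -> float:
    """Reverse-thinking sketch for causal questions: average the NLL
    of inferring the effect from the cause and the cause from the
    effect (templates are assumptions)."""
    forward = nll(f"{cause} As a result, {effect}")
    backward = nll(f"{effect} This happened because {cause}")
    return (forward + backward) / 2

def answer(context: str, candidates: list[str]) -> str:
    """Pick the candidate with the lowest NLL when appended to the
    context plus a generated clue."""
    clue = generate_clue(context)
    scores = [nll(f"{context} {clue} {c}") for c in candidates]
    return candidates[scores.index(min(scores))]

# COPA-style example: choose the more plausible cause.
premise = "The man broke his toe."
options = ["He got a hole in his sock.", "He dropped a hammer on his foot."]
print(answer(premise + " What was the cause?", options))
print(min(options, key=lambda c: bidirectional_score(c, premise)))

The bidirectional scorer reflects the abstract's reverse-thinking idea only schematically: scoring both cause-to-effect and effect-to-cause directions and averaging is one simple way to realize bidirectional inference, under the stated template assumptions.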