Without labeled question-answer pairs for training, unsupervised commonsense question answering (QA) is extremely challenging because it typically depends on external commonsense sources such as knowledge bases (KBs), which are highly resource-consuming to construct. Recently, pre-trained language models (PLMs) have proven effective as an alternative source of commonsense clues when they act as knowledge generators. However, existing work either relies on large-scale in-domain or out-of-domain labeled data, or fails to generate high-quality knowledge in a general way. Motivated by how humans think, we propose All-round Thinker (ArT), an approach that fully exploits association during knowledge generation. Specifically, the model first focuses on key parts of the given context and then, in an associative manner akin to human thinking, generates knowledge highly related to them. In addition, for causal reasoning, a reverse thinking mechanism is introduced to further strengthen bidirectional inference between cause and effect. ArT is fully unsupervised and KB-free. We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA, and SCT. Across PLM backbones of all scales, ArT performs strongly and outperforms previous advanced unsupervised models. Our code is available at https://github.com/WangJW424/commonsenseQA-ArT.
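For readers unfamiliar with the PLM-as-knowledge-generator setting mentioned above, the minimal sketch below illustrates the general idea of prompting an off-the-shelf language model to produce commonsense clues for a question context before answer scoring. The model name, prompt wording, and decoding settings are assumptions chosen for illustration only; this is not ArT's actual prompting or scoring procedure.

```python
# Minimal sketch (assumption): using an off-the-shelf PLM as a commonsense
# knowledge generator, in the spirit of the unsupervised QA setting above.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A COPA-style premise; the prompt nudges the PLM to free-associate a cause.
premise = "The man broke his toe."
prompt = f"{premise} This happened because"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,          # keep generated clues short
    do_sample=True,             # sample diverse associations
    top_p=0.9,
    num_return_sequences=3,     # several candidate knowledge statements
    pad_token_id=tokenizer.eos_token_id,
)

# Each decoded continuation is one generated commonsense clue that could later
# be prepended to the question when scoring answer candidates with the PLM.
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```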