Current pre-trained language models contain a great deal of knowledge, but a more limited ability to use that knowledge. Bloom's Taxonomy helps educators teach children how to use knowledge by categorizing comprehension skills, so we use it to analyze and improve the comprehension skills of large pre-trained language models. Our experiments focus on zero-shot question answering, using the taxonomy to provide proximal, question-relevant context that helps the model answer those questions. We show that targeting context in this manner improves performance across four popular commonsense question answering datasets.
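To make the setup concrete, below is a minimal sketch of zero-shot multiple-choice question answering with a prepended context sentence, scored by a causal language model's log-likelihood of each answer choice. This is not the authors' released code: the model (`gpt2`), the example question, the answer choices, and the context string are all illustrative assumptions, and the Bloom's-Taxonomy-guided selection of that context is not shown.

```python
# Sketch: zero-shot multiple-choice QA with a prepended context sentence.
# Assumptions: a generic causal LM (gpt2) and a hand-written context string;
# the paper's taxonomy-based context selection is not implemented here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def answer_log_likelihood(prompt: str, answer: str) -> float:
    """Sum the model's log-probabilities of the answer tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the answer tokens, i.e. positions after the prompt; the
    # distribution for the token at position `pos` lives at logits[pos - 1].
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()
    return total

# Hypothetical example: the context sentence plays the role of the
# "proximal, question-relevant context" described in the abstract.
context = "Rivers carry water downhill toward larger bodies of water."
question = "Where does a river usually end up?"
choices = ["the ocean", "a mountain top", "the desert"]

prompt = f"{context} Question: {question} Answer:"
scores = {c: answer_log_likelihood(prompt, c) for c in choices}
print(max(scores, key=scores.get))
```

Comparing accuracy with and without the `context` prefix on a commonsense QA dataset is one simple way to measure the effect of targeted context that the abstract describes.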