Conceptual knowledge is fundamental to human cognition and knowledge bases. However, existing knowledge probing works focus only on evaluating the factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge. Since conceptual knowledge often appears as implicit commonsense behind texts, designing probes for conceptual knowledge is hard. Inspired by knowledge representation schemata, we comprehensively evaluate the conceptual knowledge of PLMs by designing three tasks that probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts, respectively. For these tasks, we collect and annotate 24k data instances covering 393 concepts, which constitute COPEN, a COnceptual knowledge Probing bENchmark. Extensive experiments on PLMs of different sizes and types show that existing PLMs systematically lack conceptual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing human-like cognition in PLMs. COPEN and our code are publicly released at https://github.com/THU-KEG/COPEN.