Large pre-trained language models have been shown to encode substantial amounts of world and commonsense knowledge in their parameters, leading to considerable interest in methods for extracting that knowledge. In past work, knowledge was extracted by taking manually-authored queries and gathering paraphrases for them using a separate pipeline. In this work, we propose a method for automatically rewriting queries into "BERTese", a paraphrased query that is directly optimized for better knowledge extraction. To ensure meaningful rewrites, we add auxiliary loss functions that push the query toward actual language tokens. We empirically show that our approach outperforms competing baselines, obviating the need for complex pipelines. Moreover, BERTese provides some insight into the kind of language that helps language models perform knowledge extraction.
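As a rough illustration of the auxiliary objective described above, here is a minimal sketch of a loss that pulls a rewriter's continuous ("soft") query embeddings toward the nearest entries in the model's vocabulary embedding table, so the rewritten query corresponds to actual language tokens. The function name, tensor shapes, and use of random tensors are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def nearest_token_loss(soft_embeds: torch.Tensor, vocab_embeds: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss (illustrative): penalize the distance from each soft
    query embedding to its closest vocabulary embedding, encouraging the
    rewritten query to stay close to real tokens.

    soft_embeds:  (seq_len, dim)    -- rewriter output for one query
    vocab_embeds: (vocab_size, dim) -- frozen embedding table of the LM
    """
    dists = torch.cdist(soft_embeds, vocab_embeds)  # (seq_len, vocab_size)
    return dists.min(dim=-1).values.mean()          # mean distance to nearest real token

# Toy usage with random tensors standing in for a rewriter's output
# and a BERT-base-sized vocabulary (hypothetical values):
soft = torch.randn(12, 768)        # 12 soft tokens in the rewritten query
vocab = torch.randn(30522, 768)    # vocabulary embedding table
aux = nearest_token_loss(soft, vocab)
```

In training, a term like this would be added to the main knowledge-extraction objective (e.g., the masked-LM likelihood of the correct answer), so that gradient updates improve extraction while keeping the rewrite interpretable as language.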