Pretrained language models (PLMs) have been shown to accumulate factual knowledge during pretraining (Petroni et al., 2019). Recent works probe PLMs for the extent of this knowledge through prompts, either in discrete or continuous form. However, these methods do not consider the symmetry of the task: predicting the object given the subject, and predicting the subject given the object. In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for factual probing in PLMs that leverages the symmetry of the task by constructing symmetrical prompts for subject and object prediction. Our results on the popular factual probing dataset LAMA show a significant improvement of SPE over previous probing methods.
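To make the idea concrete, below is a minimal sketch of how symmetrical continuous prompts could be wired to a BERT-style masked LM: the same trainable prompt vectors sit between the subject and object slots, with the [MASK] on the object side for object prediction and on the subject side for subject prediction. This is an illustration under stated assumptions, not the paper's implementation; `soft_prompt`, the single-token scoring, and the example entity pair are hypothetical.

```python
# Minimal sketch of symmetrical continuous prompting with a masked LM.
# NOT the authors' SPE implementation; identifiers and the example
# subject/object pair ("Barack Obama" / "Hawaii") are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
embed = model.get_input_embeddings()

# One trainable soft prompt per relation (here: 5 prompt vectors),
# warm-started from existing token embeddings.
n_prompt = 5
soft_prompt = torch.nn.Parameter(embed.weight[:n_prompt].detach().clone())
prompt = soft_prompt.unsqueeze(0)                # (1, n_prompt, hidden)

def embed_text(text):
    ids = tokenizer(text, add_special_tokens=False, return_tensors="pt").input_ids
    return embed(ids)                            # (1, len, hidden)

def special(tok_id):
    return embed(torch.tensor([[tok_id]]))       # (1, 1, hidden)

cls_e, sep_e, mask_e = (special(t) for t in
    (tokenizer.cls_token_id, tokenizer.sep_token_id, tokenizer.mask_token_id))

def mask_logits(segments, mask_pos):
    """Run the MLM over concatenated embedded segments; return logits at [MASK]."""
    inputs_embeds = torch.cat(segments, dim=1)
    attn = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
    return model(inputs_embeds=inputs_embeds, attention_mask=attn).logits[0, mask_pos]

subj, obj = embed_text("Barack Obama"), embed_text("Hawaii")

# Object prediction:  [CLS] subject <soft prompt> [MASK] [SEP]
obj_logits = mask_logits([cls_e, subj, prompt, mask_e, sep_e],
                         mask_pos=1 + subj.shape[1] + n_prompt)

# Subject prediction with the SAME prompt, roles swapped:
#                     [CLS] [MASK] <soft prompt> object [SEP]
subj_logits = mask_logits([cls_e, mask_e, prompt, obj, sep_e], mask_pos=1)

# A symmetric training objective could combine both directions, e.g.
# loss = ce(obj_logits, gold_obj_id) + ce(subj_logits, gold_subj_id).
print(tokenizer.decode([obj_logits.argmax().item()]))
```

Because the two directions share the same prompt parameters, gradients from subject prediction also shape the prompt used for object prediction, which is one plausible way a symmetric signal could regularize the learned prompt.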