Abstractive summarization is the process of generating a summary from an input document. Although significant progress has been made, factual inconsistency between the document and the generated summary still limits its practical applications. Previous work found that the probabilities assigned by the generation model reflect its preferences for the generated summary, including the preference for factual consistency and the preference for the language or knowledge prior. To isolate the preference for factual consistency, we propose an unsupervised framework named CoP that controls the preference of the generation model with the help of a prompt. More specifically, the framework performs an extra inference step in which a text prompt is introduced as an additional input. In this way, another preference is described by the generation probability of this extra inference process. The difference between the two preferences, i.e., the difference between the probabilities, can be used as a measurement for detecting factual inconsistencies. Interestingly, we found that with a properly designed prompt, our framework can evaluate specific preferences and serve as a measurement for fine-grained categories of inconsistency, such as entity-related inconsistency, coreference-related inconsistency, etc. Moreover, our framework can also be extended to the supervised setting to learn better prompts from labeled data. Experiments show that our framework achieves new SOTA results on three factual inconsistency detection tasks.
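To make the probability-difference idea concrete, below is a minimal sketch assuming a BART-style summarizer from HuggingFace Transformers. The helper name token_logprobs, the choice of checkpoint, and the simple "prompt + document" concatenation are illustrative assumptions, not the authors' released implementation; the exact prompt design in CoP may differ.

```python
# Minimal sketch: score the candidate summary with and without a text prompt
# added to the encoder input, then compare per-token probabilities.
# (Assumed setup: a pretrained BART summarizer; names here are illustrative.)
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name).eval()

def token_logprobs(source_text, summary_text):
    """Per-token log-probabilities of the summary given the source text."""
    enc = tokenizer(source_text, return_tensors="pt", truncation=True)
    dec = tokenizer(summary_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(input_ids=enc.input_ids,
                    attention_mask=enc.attention_mask,
                    labels=dec.input_ids)
    logprobs = torch.log_softmax(out.logits, dim=-1)
    # Log-probability the model assigns to each summary token.
    return logprobs.gather(-1, dec.input_ids.unsqueeze(-1)).squeeze(-1)[0]

document = "..."   # source document (placeholder)
summary = "..."    # candidate summary to check (placeholder)
prompt = summary   # one possible prompt choice: the candidate summary itself

p_plain = token_logprobs(document, summary)                 # preference without prompt
p_prompt = token_logprobs(prompt + " " + document, summary) # preference with prompt

# Tokens whose probability rises sharply once the prompt is added are supported
# mainly by the prompt rather than the document, so they are candidates for
# factual inconsistency.
diff = p_prompt - p_plain
print(diff)
```

In this sketch the inconsistency signal is read off token by token from diff; a sentence- or summary-level score could be obtained by pooling these differences, and the prompt text could be varied to probe specific inconsistency categories.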