Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although pre-trained models have achieved significant progress, substantial amounts of hallucinated content are still found during human evaluation. Pre-trained models are most commonly fine-tuned with cross-entropy loss for text summarization, which may not be an optimal strategy. In this work, we provide a typology of factual errors, together with annotated data, to highlight the types of errors and move beyond a binary understanding of factuality. We further propose a training strategy, called ConFiT, that improves the factual consistency and overall quality of summaries via a novel contrastive fine-tuning. Based on our linguistically informed typology of errors, we design modular objectives that each target a specific error type. Specifically, we utilize hard negative samples containing errors to reduce the generation of factually inconsistent content. To capture the key information exchanged between speakers, we also design a dialogue-specific loss. Using human evaluation and automatic faithfulness metrics, we show that our model significantly reduces all kinds of factual errors on the SAMSum dialogue summarization corpus. Moreover, our model generalizes to the AMI meeting summarization corpus and achieves significantly higher scores than most baselines on both datasets in terms of word-overlap metrics.
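Since the abstract only names the contrastive objective, a minimal sketch may help fix ideas: the snippet below combines standard cross-entropy on the reference summary with a margin-based contrastive term that penalizes a hard negative (a reference summary perturbed to contain a factual error). This is an illustrative assumption about how such a loss can be wired up with a Hugging Face seq2seq model, not the authors' actual ConFiT implementation; the names confit_style_loss, src_ids, pos_ids, neg_ids, and margin are hypothetical.

import torch.nn.functional as F

def confit_style_loss(model, src_ids, pos_ids, neg_ids, margin=1.0):
    """Sketch (not the authors' code): cross-entropy on the reference summary
    plus a margin ranking term that pushes the factually corrupted negative
    below the reference. `model` is assumed to be a Hugging Face
    BartForConditionalGeneration-style model whose forward pass returns the
    mean token cross-entropy in `.loss` when `labels` are given."""
    nll_pos = model(input_ids=src_ids, labels=pos_ids).loss  # reference summary
    nll_neg = model(input_ids=src_ids, labels=neg_ids).loss  # hard negative with an injected factual error
    # The negative should score at least `margin` nats worse than the reference.
    ranking = F.relu(margin + nll_pos - nll_neg)
    return nll_pos + ranking

Under this reading, the per-type modular objectives would correspond to different strategies for constructing neg_ids (e.g., swapping speaker names or entities), and the dialogue-specific loss would enter as an additional term; both are omitted from this sketch.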