For summarization, human preference is critical for aligning the summarizer's outputs with human interests, since ground-truth summaries are scarce and ambiguous. Practical settings require dynamic exchanges between humans and an AI agent, in which feedback is provided online, a few examples at a time. In this paper, we introduce a new framework for interactively training summarization models with preference feedback. By properly leveraging offline data and a novel reward model, we improve performance in terms of ROUGE scores and sample efficiency. Our experiments on three diverse datasets confirm the benefits of the proposed framework in the active, few-shot, and online settings of preference learning.