Paraphrase generation is a difficult problem. This is not only because of the limitations of text generation capabilities, but also due to the lack of a proper definition of what qualifies as a paraphrase and of corresponding metrics to measure how good it is. Evaluating paraphrase quality is an ongoing research problem. Most of the metrics currently in use, having been borrowed from other tasks, do not capture the complete essence of a good paraphrase and often fail at borderline cases. In this work, we propose a novel metric, $ROUGE_P$, to measure the quality of paraphrases along the dimensions of adequacy, novelty, and fluency. We also provide empirical evidence that current natural language generation metrics are insufficient to measure these desired properties of a good paraphrase. We examine paraphrase model fine-tuning and generation through the lens of metrics to gain a deeper understanding of what it takes to generate and evaluate a good paraphrase.
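As a minimal illustration of the insufficiency claim (a toy example of our own using the open-source rouge-score package, not the paper's evaluation code or the $ROUGE_P$ definition): a candidate that merely copies its source receives a perfect standard ROUGE score, because surface-overlap metrics reward adequacy while remaining blind to novelty.

```python
# Toy illustration: standard ROUGE gives a perfect score to a "paraphrase"
# that copies its source verbatim, i.e. zero novelty is not penalized.
# Requires the rouge-score package (pip install rouge-score).
from rouge_score import rouge_scorer

source = "the quick brown fox jumps over the lazy dog"
copy_candidate = source  # verbatim copy: adequate and fluent, but no novelty
rephrased_candidate = "a fast brown fox leaps over the sleepy dog"

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

# Score each candidate against the source sentence.
print(scorer.score(source, copy_candidate))       # rouge1/rougeL F1 = 1.0
print(scorer.score(source, rephrased_candidate))  # lower F1, despite rewording more
```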