Recently proposed BERT-based evaluation metrics for text generation perform well on standard benchmarks but are vulnerable to adversarial attacks, e.g., relating to information correctness. We argue that this stems (in part) from the fact that they are models of semantic similarity. In contrast, we develop evaluation metrics based on Natural Language Inference (NLI), which we deem a more appropriate modeling choice. We design a preference-based adversarial attack framework and show that our NLI-based metrics are much more robust to the attacks than the recent BERT-based metrics. On standard benchmarks, our NLI-based metrics outperform existing summarization metrics, but perform below SOTA MT metrics. However, when combining existing metrics with our NLI metrics, we obtain both higher adversarial robustness (15%-30%) and higher-quality metrics as measured on standard benchmarks (+5% to 30%).
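As a minimal sketch of the general idea (not the paper's exact implementation), the snippet below scores a generated candidate against a reference with an off-the-shelf NLI model and mixes the resulting entailment probability with a score from an existing similarity metric. The checkpoint name `roberta-large-mnli`, the bidirectional averaging, and the `weight` mixing parameter are illustrative assumptions, not the metric configuration reported in the paper.

```python
# Sketch: using an NLI cross-encoder as an evaluation metric and combining it
# with an existing similarity-based metric score (illustrative, not the
# paper's exact recipe).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # any MNLI-style cross-encoder could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """P(entailment) of `hypothesis` given `premise` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Label order differs across checkpoints; read it from the model config.
    ent_idx = {k.lower(): v for k, v in model.config.label2id.items()}["entailment"]
    return probs[ent_idx].item()

def nli_metric(reference: str, candidate: str) -> float:
    # Average both directions so the score penalizes unsupported additions as
    # well as missing content (one of several plausible aggregations).
    return 0.5 * (entailment_prob(reference, candidate)
                  + entailment_prob(candidate, reference))

def combined_metric(reference: str, candidate: str, sim_score: float,
                    weight: float = 0.5) -> float:
    # `sim_score` is a score from an existing metric such as BERTScore,
    # rescaled to [0, 1]; `weight` is a hypothetical mixing knob.
    return weight * nli_metric(reference, candidate) + (1 - weight) * sim_score
```

In this sketch, the combination is a simple weighted average of the NLI score and the existing metric's score; the abstract reports that such combinations yield both higher adversarial robustness and better benchmark correlations than either component alone.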