Exemplification is a process by which writers explain or clarify a concept by providing an example. While common in all forms of writing, exemplification is particularly useful in the task of long-form question answering (LFQA), where a complicated answer can be made more understandable through simple examples. In this paper, we provide the first computational study of exemplification in QA, performing a fine-grained annotation of different types of examples (e.g., hypotheticals, anecdotes) in three corpora. We show that not only do state-of-the-art LFQA models struggle to generate relevant examples, but standard evaluation metrics such as ROUGE are also insufficient to judge exemplification quality. We propose to treat exemplification as a \emph{retrieval} problem in which a partially-written answer is used to query a large set of human-written examples extracted from a corpus. Our approach enables a reliable ranking-based automatic metric that correlates well with human judgments. A human evaluation shows that our model's retrieved examples are more relevant than examples generated by a state-of-the-art LFQA model.
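To make the retrieval framing concrete, the sketch below illustrates the general idea of using a partially-written answer as a query to rank candidate human-written examples by embedding similarity. The encoder name, candidate pool, and cosine-similarity scoring are illustrative assumptions, not the paper's actual retriever or training setup.

\begin{verbatim}
# Minimal sketch: rank candidate examples for a partially-written answer.
# Assumes an off-the-shelf sentence encoder (not the paper's trained model).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

partial_answer = (
    "Inflation erodes purchasing power: the same amount of money buys "
    "fewer goods over time. For example,"
)
candidate_examples = [
    "a cup of coffee that cost $1 in 1990 might cost $3 today.",
    "photosynthesis converts sunlight into chemical energy in plants.",
    "if your salary stays flat while prices rise 5% a year, you can "
    "afford less each year.",
]

# Encode the query (partial answer) and the candidate examples.
query_emb = encoder.encode(partial_answer, convert_to_tensor=True)
cand_embs = encoder.encode(candidate_examples, convert_to_tensor=True)

# Rank candidates by cosine similarity to the partial answer.
scores = util.cos_sim(query_emb, cand_embs)[0]
ranked = sorted(zip(candidate_examples, scores.tolist()),
                key=lambda x: -x[1])
for example, score in ranked:
    print(f"{score:.3f}  {example}")
\end{verbatim}

Because the task is cast as ranking a fixed pool of human-written examples, retrieval-style metrics (e.g., the rank of the gold example) can be computed automatically, which is the basis of the ranking-based evaluation mentioned above.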