Research on Automatic Story Generation (ASG) relies heavily on human and automatic evaluation. However, there is no consensus on which human evaluation criteria to use, and no analysis of how well automatic criteria correlate with them. In this paper, we propose to re-evaluate ASG evaluation. We introduce a set of 6 orthogonal and comprehensive human criteria, carefully motivated by the social sciences literature. We also present HANNA, an annotated dataset of 1,056 stories produced by 10 different ASG systems. HANNA allows us to quantitatively evaluate the correlations of 72 automatic metrics with human criteria. Our analysis highlights the weaknesses of current metrics for ASG and allows us to formulate practical recommendations for ASG evaluation.