MultiWOZ is one of the most popular multi-domain task-oriented dialog datasets, containing 10K+ annotated dialogs covering eight domains. It has been widely adopted as a benchmark for various dialog tasks, e.g., dialog state tracking (DST), natural language generation (NLG), and end-to-end (E2E) dialog modeling. In this work, we identify an overlooked issue of dialog state annotation inconsistency in the dataset, where a slot type is tagged inconsistently across similar dialogs, leading to confusion for DST modeling. We propose an automated correction for this issue, which is present in as many as 70% of the dialogs. Additionally, we notice significant entity bias in the dataset (e.g., "cambridge" appears in 50% of the destination cities in the train domain). This entity bias can lead to named entity memorization in generative models, which may go unnoticed because the test set suffers from a similar entity bias. We release a new test set in which all entities are replaced with unseen ones. Finally, we benchmark the joint goal accuracy (JGA) of state-of-the-art DST baselines on these modified versions of the data. Our experiments show that the annotation inconsistency corrections lead to a 7-10% improvement in JGA. On the other hand, we observe a 29% drop in JGA when models are evaluated on the new test set with unseen entities.
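To make the reported metric concrete, the sketch below shows a minimal joint goal accuracy (JGA) computation: a turn counts as correct only if the full predicted dialog state matches the gold state exactly. This is a generic illustration, not the paper's evaluation code; the dictionary-based state format and the function name are assumptions.

```python
from typing import Dict, List

def joint_goal_accuracy(predicted_states: List[Dict[str, str]],
                        gold_states: List[Dict[str, str]]) -> float:
    """Fraction of turns whose predicted state matches the gold state exactly.

    Assumes each dialog state is a dict mapping "domain-slot" names to values,
    e.g., {"train-destination": "cambridge"}. Names here are illustrative only.
    """
    assert len(predicted_states) == len(gold_states)
    if not gold_states:
        return 0.0
    # A turn is correct only if every slot-value pair agrees with the gold state.
    correct = sum(pred == gold for pred, gold in zip(predicted_states, gold_states))
    return correct / len(gold_states)

# Toy usage: one fully correct turn and one turn with a wrong destination -> JGA = 0.5.
gold = [{"train-destination": "cambridge", "train-day": "monday"},
        {"train-destination": "ely"}]
pred = [{"train-destination": "cambridge", "train-day": "monday"},
        {"train-destination": "cambridge"}]
print(joint_goal_accuracy(pred, gold))  # 0.5
```

Because the match is all-or-nothing per turn, JGA is sensitive both to inconsistent slot annotations and to memorized entities, which is why it is the metric reported for the corrected data and the unseen-entity test set.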