A recurring challenge in crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine-generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI improves performance on eight out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI, compared to training on the 4x larger MultiNLI. Moreover, it continues to be more effective than MultiNLI augmented with other NLI datasets. Our results demonstrate the promise of leveraging natural language generation techniques and re-imagining the role of humans in the dataset creation process.
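The pipeline described above (select challenging seed examples via dataset cartography, prompt a language model to compose similar examples, filter automatically, then hand off to human review) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the least-confidence selection heuristic stands in for the paper's training-dynamics metrics, and `generate_fn` is a hypothetical placeholder for a GPT-3 API call.

```python
# Illustrative sketch of a WANLI-style worker-AI collaboration pipeline.
# NOT the authors' implementation; selection heuristic and generator are
# hypothetical stand-ins.

from typing import Callable, List, Optional, Tuple

Example = Tuple[str, str]  # (premise, hypothesis)

def select_ambiguous(examples: List[Example],
                     confidences: List[float],
                     k: int = 2) -> List[Example]:
    """Cartography-style seed selection: keep the k examples the trained
    model was least confident about (a stand-in for the paper's
    training-dynamics metrics)."""
    ranked = sorted(zip(examples, confidences), key=lambda p: p[1])
    return [ex for ex, _ in ranked[:k]]

def build_prompt(seeds: List[Example]) -> str:
    """Few-shot prompt asking the model to write one more example
    in the same reasoning pattern."""
    shots = "\n\n".join(f"Premise: {p}\nHypothesis: {h}" for p, h in seeds)
    return shots + "\n\nPremise:"

def pipeline(examples: List[Example],
             confidences: List[float],
             generate_fn: Callable[[str], str]) -> Optional[str]:
    """Generate one candidate example; return None if the automatic
    filter rejects it, otherwise pass it on for human review/labeling."""
    seeds = select_ambiguous(examples, confidences)
    candidate = generate_fn(build_prompt(seeds))  # e.g. a GPT-3 call
    # Toy automatic filter: drop exact copies of a seed premise.
    if any(candidate.strip() == p for p, _ in seeds):
        return None
    return candidate  # next step: revision and labeling by crowdworkers

# Toy usage with a stubbed generator in place of GPT-3.
exs = [("A man sleeps.", "A person rests."), ("Two dogs run.", "Animals move.")]
confs = [0.55, 0.95]
out = pipeline(exs, confs, lambda prompt: " A child reads a book.")
```

In the actual paper the filtering and labeling stages are substantially richer; this sketch only shows how the generative (model) and evaluative (human) roles divide the work.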