A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine-generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 108,079 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is $4$ times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI. Moreover, combining MultiNLI with WANLI is more effective than combining it with other NLI augmentation sets. Our results demonstrate the potential of natural language generation techniques to curate NLP datasets of enhanced quality and diversity.
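The dataset-cartography step mentioned above identifies examples by statistics of a model's training dynamics. The following is a minimal sketch of that idea, not the paper's actual pipeline: the function names, the array shapes, and the use of high variability as the "ambiguous" selection criterion are illustrative assumptions.

```python
import numpy as np

def data_map(gold_probs):
    """Compute cartography statistics from per-epoch gold-label
    probabilities, shaped (num_epochs, num_examples).

    confidence  = mean probability assigned to the gold label
    variability = std of that probability across epochs
    (Names follow common dataset-cartography terminology.)
    """
    confidence = gold_probs.mean(axis=0)
    variability = gold_probs.std(axis=0)
    return confidence, variability

def select_ambiguous(gold_probs, k):
    """Pick the k examples the model was least consistent on
    (highest variability), a hypothetical proxy for the
    'challenging reasoning patterns' targeted for generation."""
    _, variability = data_map(gold_probs)
    return np.argsort(-variability)[:k]

# Toy example: 3 training epochs, 4 examples.
probs = np.array([
    [0.9, 0.5, 0.1, 0.2],
    [0.9, 0.9, 0.1, 0.8],
    [0.9, 0.1, 0.1, 0.5],
])
chosen = select_ambiguous(probs, k=2)  # examples 1 and 3 fluctuate most
```

Selected examples would then seed few-shot prompts for a generator such as GPT-3, whose outputs are filtered and passed to crowdworkers for revision and labeling.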