A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine-generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is $4$ times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI. Moreover, combining MultiNLI with WANLI is more effective than combining it with other NLI augmentation sets. Our results demonstrate the potential of natural language generation techniques to curate NLP datasets of enhanced quality and diversity.
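The cartography-based selection step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual code: dataset cartography scores each training example by the model's confidence (mean probability assigned to the gold label across training epochs) and variability (standard deviation of those probabilities); highly variable, ambiguous examples tend to carry the challenging reasoning patterns used to seed GPT-3 generation. The data and function names below are hypothetical.

```python
import statistics

def cartography_scores(epoch_probs):
    """Confidence and variability for one example, given its per-epoch
    gold-label probabilities (toy version of training-dynamics statistics)."""
    confidence = statistics.mean(epoch_probs)
    variability = statistics.pstdev(epoch_probs)
    return confidence, variability

def most_ambiguous(examples, k):
    """Rank examples by variability (descending) and return the top k
    as seeds for few-shot generation."""
    ranked = sorted(
        examples,
        key=lambda ex: cartography_scores(ex["epoch_probs"])[1],
        reverse=True,
    )
    return ranked[:k]

# Toy training-dynamics records for three NLI premise/hypothesis pairs.
examples = [
    {"premise": "A man is sleeping.", "hypothesis": "A person rests.",
     "epoch_probs": [0.95, 0.96, 0.97]},   # easy: stable, confident
    {"premise": "Some birds fly.", "hypothesis": "All birds fly.",
     "epoch_probs": [0.30, 0.70, 0.40]},   # ambiguous: high variability
    {"premise": "It rained.", "hypothesis": "The ground is wet.",
     "epoch_probs": [0.55, 0.60, 0.50]},   # moderate
]

seeds = most_ambiguous(examples, k=1)
# Format the selected seeds as an in-context prompt for the generator.
prompt = "\n\n".join(
    f"Premise: {ex['premise']}\nHypothesis: {ex['hypothesis']}"
    for ex in seeds
)
print(prompt)
```

In the full pipeline, prompts like this would be sent to GPT-3, and the generations filtered automatically before human revision and labeling.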