Datasets are not only resources for training accurate, deployable systems; they also serve as benchmarks for developing new modeling approaches. While large, natural datasets are necessary for training accurate systems, are they necessary for driving modeling innovation? For example, the popular SQuAD question answering benchmark has driven the development of new modeling approaches, but could synthetic or smaller benchmarks have led to similar innovations? This counterfactual question is impossible to answer directly, but we can study a necessary condition: a benchmark's ability to recapitulate findings made on SQuAD. We conduct a retrospective study of 20 SQuAD modeling approaches, investigating how well 32 existing and synthesized benchmarks concur with SQuAD -- that is, do they rank the approaches similarly? We carefully construct small, targeted synthetic benchmarks that do not resemble natural language yet have high concurrence with SQuAD, demonstrating that naturalness and size are not necessary for reflecting historical modeling improvements on SQuAD. Our results raise the intriguing possibility that small, carefully designed synthetic benchmarks may be useful for driving the development of new modeling approaches.
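To make the notion of concurrence concrete, one simple way to quantify it is as the rank correlation between the scores two benchmarks assign to the same set of modeling approaches. The sketch below illustrates this with Kendall's tau; the abstract does not specify the exact concurrence metric, so the metric choice and the scores are illustrative assumptions, not the paper's actual data.

```python
# Minimal sketch: measuring how well a candidate benchmark "concurs" with
# SQuAD by rank-correlating the scores both assign to the same approaches.
# Kendall's tau is one reasonable rank-correlation choice; the scores below
# are hypothetical.
from scipy.stats import kendalltau

# Hypothetical scores for the same six modeling approaches on each benchmark.
squad_scores     = [66.2, 70.1, 73.8, 77.0, 84.5, 90.9]
candidate_scores = [41.3, 45.0, 47.2, 52.8, 60.1, 66.7]

tau, p_value = kendalltau(squad_scores, candidate_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
# tau near 1.0 means the candidate benchmark ranks approaches as SQuAD does,
# i.e., it would have reflected the same historical modeling improvements.
```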