Natural Language Inference (NLI) is considered a representative task for testing natural language understanding (NLU). In this work, we propose an extensible framework to collectively yet categorically test the diverse logical reasoning capabilities required for NLI (and, by extension, NLU). Motivated by behavioral testing, we create a semi-synthetic large test bench (363 templates, 363k examples) and an associated framework that offers the following utilities: 1) individually testing and analyzing reasoning capabilities along 17 reasoning dimensions (including pragmatic reasoning); 2) designing experiments to study cross-capability information content (leave-one-out or bring-one-in); and 3) controlling for artifacts and biases, owing to the synthetic nature of the data. The inherited power of automated test-case instantiation from free-form natural language templates (using CheckList), together with a well-defined taxonomy of capabilities, enables us to extend to (cognitively) harder test cases while varying the complexity of the natural language. Through our analysis of state-of-the-art NLI systems, we observe that our benchmark is indeed hard (and remains non-trivial even with training on additional resources). Some capabilities stand out as harder than others. Further fine-grained analysis and fine-tuning experiments reveal more insights about these capabilities and the models, supporting and extending previous observations. Finally, we perform a user study to investigate whether behavioral information can be utilized to generalize much better for some models compared to others.
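To make the instantiation step concrete, the following is a minimal sketch of how a free-form (premise, hypothesis) template can be expanded into labeled test cases with the CheckList Editor. The template, fill-in lexicon, and label below are illustrative placeholders, not templates from our benchmark.

```python
# Minimal sketch: expanding a (premise, hypothesis) template pair into
# labeled NLI test cases with CheckList (Ribeiro et al., 2020).
# The template, the 'adj' lexicon, and the label are hypothetical examples.
from checklist.editor import Editor

editor = Editor()

# A template pair targeting a simple negation capability.
# '{first_name}' is a built-in CheckList lexicon; '{adj}' is supplied below.
ret = editor.template(
    ('{first_name} is {adj}.',         # premise template
     '{first_name} is not {adj}.'),    # hypothesis template
    adj=['happy', 'tired', 'hungry'],  # illustrative fill-in values
    labels='contradiction',            # gold label for every instantiation
    nsamples=5,                        # sample 5 filled-in pairs
    remove_duplicates=True,
)

for premise, hypothesis in ret.data:
    print(f'P: {premise}  H: {hypothesis}  -> contradiction')
```

In the same spirit, each template in the test bench can be expanded automatically into many labeled examples, which is what keeps the benchmark large yet extensible.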