The release of large natural language inference (NLI) datasets like SNLI and MNLI has led to rapid development and improvement of fully neural systems for the task. Most recently, heavily pre-trained, Transformer-based models like BERT and MT-DNN have reached near-human performance on these datasets. However, these standard datasets have been shown to contain many annotation artifacts, allowing models to shortcut understanding with simple, fallible heuristics and still perform well on the test set. It is therefore no surprise that many adversarial (challenge) datasets have been created that cause models trained on standard datasets to fail dramatically. Although extra training on this data generally improves model performance on just that type of data, transferring that learning to unseen examples is still partial at best. This work evaluates the failures of state-of-the-art models on existing adversarial datasets that test different linguistic phenomena, and finds that even though the models perform similarly on MNLI, they differ greatly in their robustness to these attacks. In particular, we find that syntax-related attacks are especially effective across all models, so we provide a fine-grained analysis and comparison of model performance on those examples. We draw conclusions about the value of model size and multi-task learning (beyond comparing their standard test set performance) and provide suggestions for more effective training data.