Natural Language Inference (NLI) datasets contain annotation artefacts that result in spurious correlations between the natural language utterances and their respective entailment classes. Neural networks exploit these artefacts even when considering only the hypothesis and ignoring the premise, leading to unwanted biases. Belinkov et al. (2019b) proposed tackling this problem via adversarial training, but this can lead to learned sentence representations that still suffer from the same biases. We show that the bias can be reduced in the sentence representations by using an ensemble of adversaries, encouraging the model to jointly decrease the accuracy of these different adversaries while fitting the data. This approach produces more robust NLI models, outperforming previous de-biasing efforts when generalised to 12 other datasets (Belinkov et al., 2019a; Mahabadi et al., 2020). In addition, we find that the optimal number of adversarial classifiers depends on the dimensionality of the sentence representations, with larger sentence representations being more difficult to de-bias while benefiting from a greater number of adversaries.
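The following is a minimal sketch, not the authors' released code, of how an ensemble of hypothesis-only adversaries can be attached to an NLI model. It assumes a standard gradient-reversal formulation of adversarial training and hypothetical names (`EnsembleAdversarialNLI`, `total_loss`); the main classifier fits the NLI labels while each adversary tries to predict the label from the hypothesis representation alone, and the reversed gradient pushes the encoder to remove that hypothesis-only signal.

```python
# Sketch of ensemble adversarial de-biasing for NLI (assumed formulation,
# not the paper's exact implementation).
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class EnsembleAdversarialNLI(nn.Module):
    def __init__(self, dim=256, n_classes=3, n_adversaries=5, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Main NLI head sees both premise and hypothesis representations.
        self.nli_head = nn.Linear(2 * dim, n_classes)
        # Ensemble of hypothesis-only adversarial classifiers.
        self.adversaries = nn.ModuleList(
            nn.Linear(dim, n_classes) for _ in range(n_adversaries)
        )

    def forward(self, premise_repr, hypothesis_repr):
        nli_logits = self.nli_head(
            torch.cat([premise_repr, hypothesis_repr], dim=-1)
        )
        # Gradient reversal: each adversary minimises its own loss,
        # while the encoder upstream is pushed to maximise it.
        reversed_hyp = grad_reverse(hypothesis_repr, self.lambd)
        adv_logits = [adv(reversed_hyp) for adv in self.adversaries]
        return nli_logits, adv_logits


def total_loss(nli_logits, adv_logits, labels, criterion=nn.CrossEntropyLoss()):
    # Task loss plus the averaged adversarial losses over the ensemble.
    loss = criterion(nli_logits, labels)
    loss = loss + sum(criterion(a, labels) for a in adv_logits) / len(adv_logits)
    return loss
```

In this sketch the number of adversaries (`n_adversaries`) and the representation size (`dim`) are the two knobs the abstract refers to: larger representations would, per the paper's finding, call for a larger ensemble to achieve comparable de-biasing.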