Natural Language Inference (NLI) models are known to learn from biases and artefacts within their training data, impacting how well they generalise to other unseen datasets. Existing de-biasing approaches focus on preventing the models from learning these biases, which can result in restrictive models and lower performance. We instead investigate teaching the model how a human would approach the NLI task, so that it learns features that generalise better to previously unseen examples. Using natural language explanations, we supervise the model's attention weights to encourage more attention to be paid to the words present in the explanations, significantly improving model performance. Our experiments show that the in-distribution improvements of this method are accompanied by out-of-distribution improvements, with the supervised models learning features that generalise better to other NLI datasets. Analysis of the model indicates that human explanations encourage increased attention on the important words, with more attention paid to words in the premise and less attention paid to punctuation and stop-words.
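As a rough illustration of the attention supervision described above, the sketch below (PyTorch; all names hypothetical, not the paper's actual implementation) adds an auxiliary loss that pulls the model's attention distribution towards a target distribution placed uniformly over the words appearing in the human explanation. The exact loss formulation and the choice of which attention weights to supervise may differ in the paper; this is a minimal sketch under those assumptions.

```python
import torch
import torch.nn.functional as F


def attention_supervision_loss(attn_weights, expl_mask, eps=1e-8):
    """KL divergence between the model's attention distribution and a
    target distribution uniform over explanation words.

    attn_weights: (batch, seq_len) attention paid to each input token,
        e.g. the [CLS] token's attention averaged over heads (an
        assumption; the supervised weights could be chosen differently).
    expl_mask: (batch, seq_len) binary mask, 1 where the input token
        also appears in the natural language explanation.
    """
    # Normalise the binary mask into a target probability distribution.
    target = expl_mask / (expl_mask.sum(dim=-1, keepdim=True) + eps)
    # KL(attention || target) penalises attention mass placed away
    # from explanation words; F.kl_div expects log-probabilities as input.
    log_attn = torch.log(attn_weights + eps)
    return F.kl_div(log_attn, target, reduction="batchmean")


def joint_loss(logits, labels, attn_weights, expl_mask, lam=1.0):
    """Standard NLI cross-entropy plus the attention term, weighted by
    a hyperparameter lambda (value here is illustrative)."""
    ce = F.cross_entropy(logits, labels)
    return ce + lam * attention_supervision_loss(attn_weights, expl_mask)
```

In this sketch the supervision is soft: the classifier is still trained on the NLI labels, and the auxiliary term merely encourages attention towards explanation words rather than hard-constraining it, consistent with the abstract's framing of guiding rather than restricting the model.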