Recent studies reveal that various biases exist across NLP tasks, and that over-reliance on these biases leads to poor generalization and low adversarial robustness. To mitigate dataset biases, previous works propose numerous debiasing techniques targeting specific biases; these perform well on the corresponding adversarial sets but fail to mitigate other biases. In this paper, we propose a new debiasing method, Sparse Mixture-of-Adapters (SMoA), which mitigates multiple dataset biases effectively and efficiently. Experiments on Natural Language Inference and Paraphrase Identification tasks demonstrate that SMoA outperforms full fine-tuning, adapter-tuning baselines, and prior strong debiasing methods. Further analysis highlights the interpretability of SMoA: each sub-adapter captures a specific pattern from the training data and specializes in handling the corresponding bias.
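To make the architecture named in the abstract concrete, below is a minimal PyTorch sketch of a sparse mixture-of-adapters layer. It assumes standard bottleneck sub-adapters and top-k token-level routing; all class and parameter names (SparseMixtureOfAdapters, num_adapters, bottleneck_dim, top_k) are illustrative, and the exact routing scheme and adapter placement in SMoA may differ from this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMixtureOfAdapters(nn.Module):
    """Hypothetical sketch: a router sparsely selects top-k bottleneck
    sub-adapters per token and mixes their outputs with a residual
    connection, as in standard adapter tuning."""

    def __init__(self, hidden_dim: int, num_adapters: int = 4,
                 bottleneck_dim: int = 64, top_k: int = 1):
        super().__init__()
        self.top_k = top_k
        # Router scores each sub-adapter per token.
        self.router = nn.Linear(hidden_dim, num_adapters)
        # Each sub-adapter is a standard down/up bottleneck projection.
        self.adapters = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, bottleneck_dim),
                nn.ReLU(),
                nn.Linear(bottleneck_dim, hidden_dim),
            )
            for _ in range(num_adapters)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim)
        gate_logits = self.router(hidden)                    # (B, S, N)
        top_vals, top_idx = gate_logits.topk(self.top_k, dim=-1)
        gates = F.softmax(top_vals, dim=-1)                  # renormalize over top-k

        mixed = torch.zeros_like(hidden)
        for slot in range(self.top_k):
            idx = top_idx[..., slot]                         # (B, S)
            weight = gates[..., slot].unsqueeze(-1)          # (B, S, 1)
            for a, adapter in enumerate(self.adapters):
                # Keep only tokens routed to sub-adapter a at this slot.
                mask = (idx == a).unsqueeze(-1)
                mixed = mixed + mask * weight * adapter(hidden)
        # Residual connection around the mixed adapter output.
        return hidden + mixed
```

Under this sketch, sparsity means each token activates only top_k of the num_adapters sub-adapters, which is what would let individual sub-adapters specialize to distinct bias patterns while keeping the added compute small.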