Deep neural models (e.g. Transformers) naturally learn spurious features, which create a ``shortcut'' between labels and inputs and thus impair generalization and robustness. This paper advances the self-attention mechanism to a robust variant for Transformer-based pre-trained language models (e.g. BERT). We propose the \textit{Adversarial Self-Attention} mechanism (ASA), which adversarially biases the attention to effectively suppress the model's reliance on certain features (e.g. specific keywords) and encourage its exploration of broader semantics. We conduct a comprehensive evaluation across a wide range of tasks for both the pre-training and fine-tuning stages. For pre-training, ASA yields remarkable performance gains compared to naively training for longer steps. For fine-tuning, ASA-empowered models outperform naive models by a large margin in terms of both generalization and robustness.
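As a rough sketch of the idea (notation ours, not drawn verbatim from the method section): ASA can be read as a min-max game in which an adversary picks a constrained bias $\delta$ on the self-attention logits to maximize the task loss, while the model parameters $\theta$ are trained to minimize it,
\begin{equation}
\min_{\theta} \; \max_{\delta \in \mathcal{C}} \;
\mathcal{L}\!\left( f_{\theta}\!\left( x;\, \mathrm{softmax}\!\left( \tfrac{QK^{\top}}{\sqrt{d}} + \delta \right) \right),\, y \right),
\end{equation}
where $\mathcal{C}$ denotes a constraint set (e.g. a budget on how strongly attention to individual tokens may be suppressed), so the adversarial bias removes the easiest ``shortcut'' features and forces the model to attend to broader context.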