Deep neural models (e.g., Transformers) naturally learn spurious features that create a ``shortcut'' between the inputs and labels, impairing generalization and robustness. This paper advances the self-attention mechanism to a robust variant for Transformer-based pre-trained language models (e.g., BERT). We propose the \textit{Adversarial Self-Attention} mechanism (ASA), which adversarially biases the attention to effectively suppress the model's reliance on specific features (e.g., particular keywords) and encourage its exploration of broader semantics. We conduct a comprehensive evaluation across a wide range of tasks for both the pre-training and fine-tuning stages. For pre-training, ASA yields a remarkable performance gain over naive training run for longer steps. For fine-tuning, ASA-empowered models outperform naive models by a large margin in terms of both generalization and robustness.
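The adversarial biasing of attention described above can be summarized as a min-max objective (a sketch in our own notation, not the paper's exact formulation; $A$ denotes the self-attention distribution, $\delta$ an adversarial bias on the attention constrained to a feasible set $\mathcal{C}$, and $\mathcal{L}$ the task loss):

```latex
\min_{\theta}\;\max_{\delta \in \mathcal{C}}\;
\mathcal{L}\bigl(f_{\theta}(x \mid A \odot \delta),\, y\bigr)
```

Intuitively, the inner maximization finds an attention bias that most degrades the prediction, so that minimizing the outer loss discourages the model from leaning on any single attended feature.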