The multi-head attention mechanism of the Transformer, in which multiple attention heads operate in parallel, improves performance across applications such as Neural Machine Translation (NMT) and text classification. Ideally, different heads attend to different parts of the input. In practice, however, several heads may attend to the same part, making those heads redundant and leaving model capacity under-utilized. One way to avoid this is to prune the least important heads according to some importance score. In this work, we design a Dynamic Head Importance Computation Mechanism (DHICM) that dynamically computes the importance of each head with respect to the input. Our key idea is to add a second attention layer on top of multi-head attention, which uses the multi-head outputs together with the input to compute a per-head importance score. We further add an auxiliary loss that prevents the model from assigning the same score to all heads, so that the more important heads can be identified and performance improved. We evaluate DHICM on NMT across different language pairs. Experiments on several datasets show that DHICM outperforms the standard Transformer approach by a large margin, especially when little training data is available.
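The scoring idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the projection matrices `W_q` and `W_k` and the entropy-based penalty are assumptions made for the sketch. An attention layer scores each head's output against the input, and a penalty term discourages a uniform importance distribution.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def head_importance(head_outputs, x, W_q, W_k):
    # head_outputs: (n_heads, d) per-head outputs of multi-head attention
    # x: (d,) pooled input representation
    # W_q, W_k: hypothetical learned projections (assumed for this sketch)
    q = x @ W_q                           # query from the input, (d_k,)
    k = head_outputs @ W_k                # one key per head, (n_heads, d_k)
    scores = k @ q / np.sqrt(q.shape[0])  # scaled dot-product scores
    return softmax(scores)                # importance weight per head

def uniformity_penalty(alpha):
    # Entropy of the importance distribution: highest when all heads get
    # the same score. Adding it to the training loss pushes the model to
    # assign distinct scores to heads (the role of the paper's extra loss).
    return -(alpha * np.log(alpha + 1e-9)).sum()
```

The importance weights can then rescale each head's contribution before the output projection, so redundant heads receive less weight for a given input.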