By exploiting the high degree of parallelism offered by graphics processing units, transformer architectures have enabled tremendous strides in natural language processing. In a traditional masked language model, special MASK tokens prompt the model to gather contextual information from surrounding words in order to recover the hidden content. In this paper, we explore a task-specific masking framework for pre-trained large language models that improves performance on particular downstream tasks drawn from the GLUE benchmark. We develop our own masking algorithm, Typhoon, based on token input gradients, and compare it against standard masking baselines. We find that Typhoon offers performance competitive with whole-word masking on the MRPC dataset. Our implementation can be found in a public GitHub repository.
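To make the gradient-based masking idea concrete, the sketch below shows one way to score tokens by the norm of the loss gradient with respect to their input embeddings and mask the highest-scoring positions. This is only an illustrative sketch under assumed choices (a BERT-base masked LM, a fixed top-k of 3, full-sequence reconstruction loss for the saliency pass); it is not the Typhoon implementation from the repository.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed model/checkpoint choice; Typhoon itself may target a different backbone.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors="pt")
input_ids = enc["input_ids"]
attention_mask = enc["attention_mask"]

# Embed the tokens ourselves so we can take gradients w.r.t. the embeddings.
embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)

# Saliency pass: reconstruction loss over all tokens, then backprop to the inputs.
out = model(inputs_embeds=embeds, attention_mask=attention_mask, labels=input_ids)
out.loss.backward()

# Per-token saliency = L2 norm of the input-embedding gradient.
saliency = embeds.grad.norm(dim=-1).squeeze(0)

# Never mask special tokens such as [CLS] and [SEP].
special = torch.tensor(
    [tid in tokenizer.all_special_ids for tid in input_ids[0].tolist()]
)
saliency = saliency.masked_fill(special, float("-inf"))

# Mask the k most salient tokens (k = 3 is an arbitrary choice for illustration).
k = 3
top_positions = saliency.topk(k).indices
masked_ids = input_ids.clone()
masked_ids[0, top_positions] = tokenizer.mask_token_id

print(tokenizer.decode(masked_ids[0]))
```

A lower-saliency variant (masking the least influential tokens) or a probability-weighted sampling scheme would follow the same structure; only the selection of `top_positions` changes.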