Parameter-efficient tuning aims to update only a small subset of parameters when adapting a pretrained model to downstream tasks. In this work, we introduce PASTA, in which we modify only the special token representations (e.g., [SEP] and [CLS] in BERT) before the self-attention module at each layer of Transformer-based models. PASTA achieves performance comparable to full finetuning on natural language understanding tasks, including text classification and NER, while training as little as 0.029% of the total parameters. Our work not only provides a simple yet effective approach to parameter-efficient tuning, with a wide range of practical applications when deploying finetuned models for multiple tasks, but also demonstrates the pivotal role of special tokens in pretrained language models.
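To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described above: at each layer, a trainable offset vector is added to the hidden states at the special-token positions (e.g., [CLS] at position 0) before the layer's self-attention runs, while all pretrained weights stay frozen. The module and argument names (PastaLayer, special_positions) are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn


class PastaLayer(nn.Module):
    """Wraps a frozen pretrained Transformer layer and adds a trainable
    offset to the special-token representations before self-attention."""

    def __init__(self, encoder_layer: nn.Module, hidden_size: int,
                 special_positions=(0,)):
        super().__init__()
        self.encoder_layer = encoder_layer          # frozen pretrained layer
        self.special_positions = special_positions  # e.g., position of [CLS]
        # One trainable offset vector per special-token position.
        self.offsets = nn.ParameterDict({
            str(p): nn.Parameter(torch.zeros(hidden_size))
            for p in special_positions
        })
        for param in self.encoder_layer.parameters():
            param.requires_grad = False

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Add the learned offsets to the special-token positions, then run
        # the (frozen) layer, whose self-attention sees the modified states.
        hidden_states = hidden_states.clone()
        for p in self.special_positions:
            hidden_states[:, p, :] = hidden_states[:, p, :] + self.offsets[str(p)]
        return self.encoder_layer(hidden_states)


if __name__ == "__main__":
    hidden = 64
    # A stand-in for one frozen pretrained encoder layer.
    layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
    pasta_layer = PastaLayer(layer, hidden_size=hidden, special_positions=(0,))

    x = torch.randn(2, 16, hidden)                  # (batch, seq_len, hidden)
    out = pasta_layer(x)
    trainable = [n for n, p in pasta_layer.named_parameters() if p.requires_grad]
    print(out.shape, trainable)                     # only the offset vectors train
```

In this sketch the only trainable parameters are the per-layer offset vectors, which is what keeps the trainable fraction tiny relative to the full model.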