Parameter-efficient methods (such as prompt tuning or adapters) for adapting pre-trained language models to downstream tasks have become popular recently. However, several obstacles still prevent these methods from reaching their full potential; two significant challenges are few-shot adaptation and cross-task generalization. To tackle these issues, we propose a general framework to enhance the few-shot adaptation and cross-domain generalization ability of parameter-efficient methods. In our framework, we prime the self-supervised model so that parameter-efficient methods can rapidly adapt to various downstream few-shot tasks. To evaluate the authentic generalization ability of these parameter-efficient methods, we conduct experiments on a few-shot cross-domain benchmark containing 160 diverse NLP tasks. The experimental results reveal that priming by tuning only the PLM with extra training tasks leads to the best performance. In addition, we perform a comprehensive analysis of various parameter-efficient methods under few-shot cross-domain scenarios.
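To make the two-stage recipe in the abstract concrete, below is a minimal, self-contained PyTorch sketch of one possible instantiation: stage 1 "primes" the backbone by tuning only the PLM on extra training tasks, and stage 2 freezes the primed backbone and tunes only a small parameter-efficient module (a bottleneck adapter here) on a few-shot target task. The toy encoder, adapter size, random data, and training loop are illustrative assumptions, not the authors' implementation or benchmark.

```python
# Illustrative sketch of priming + parameter-efficient few-shot adaptation.
# All components here are stand-ins chosen for a runnable example.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Residual bottleneck adapter: the only module trained in stage 2."""
    def __init__(self, d_model: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


class ToyBackbone(nn.Module):
    """Stand-in for a pre-trained language model with an adapter and task head."""
    def __init__(self, d_model: int = 32, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        self.adapter = Adapter(d_model)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        return self.head(self.adapter(self.encoder(x)))


def train(model, params, data, steps: int = 50, lr: float = 1e-3):
    """Update only the given parameter set on one (x, y) classification task."""
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    x, y = data
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()


d_model, n_classes = 32, 2
model = ToyBackbone(d_model, n_classes)

# Stage 1 (priming): tune only the PLM backbone on extra (auxiliary) training tasks.
extra_task = (torch.randn(64, d_model), torch.randint(0, n_classes, (64,)))
train(model, model.encoder.parameters(), extra_task)

# Stage 2 (few-shot adaptation): freeze the primed backbone, tune only the
# parameter-efficient module (adapter) and the task head on a few-shot task.
for p in model.encoder.parameters():
    p.requires_grad = False
few_shot_task = (torch.randn(8, d_model), torch.randint(0, n_classes, (8,)))
train(model, list(model.adapter.parameters()) + list(model.head.parameters()), few_shot_task)
```

The same pattern applies with a real PLM in place of the toy encoder: the priming stage updates the backbone weights on auxiliary tasks, while each few-shot task touches only the small adapter and head, keeping per-task storage and compute low.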