Transferring learned patterns from pretrained neural language models has been shown to significantly improve effectiveness across a variety of language-based tasks. Further tuning on an intermediate task can provide additional performance benefits, provided the intermediate task is sufficiently related to the target task. However, identifying related tasks is an open problem, and a brute-force search over task combinations is prohibitively expensive. This raises the question: can selective fine-tuning improve the effectiveness and efficiency of tasks with no training examples? In this paper, we explore statistical measures that approximate the divergence between domain representations as a means of estimating whether tuning on one task pair will yield greater performance benefits than tuning on another. These estimates can then be used to reduce the number of task pairs that need to be tested by eliminating pairs that are unlikely to provide benefits. Through experiments over 58 tasks and more than 6,600 task pair combinations, we demonstrate that statistical measures can distinguish effective task pairs, and that the resulting estimates can reduce end-to-end runtime by up to 40%.
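To make the core idea concrete, the sketch below illustrates one plausible instance of a statistical divergence between domain representations: the Jensen-Shannon divergence between smoothed unigram distributions of two task corpora. The specific choice of unigram representations, the toy corpora, and the helper names (`unigram_dist`, `js_divergence`) are assumptions for illustration, not the paper's exact method; the paper's measures may operate over other representations.

```python
# Illustrative sketch only: JS divergence between unigram distributions
# of two hypothetical task corpora. A lower divergence would suggest a
# more closely related (and thus more promising) task pair to fine-tune on.
from collections import Counter
import math

def unigram_dist(texts, vocab):
    # Token counts normalized over a shared vocabulary, with add-one smoothing
    # so every vocabulary word has nonzero probability.
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts[w] + 1 for w in vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def js_divergence(p, q):
    # Jensen-Shannon divergence (base-2) between two distributions
    # defined over the same vocabulary; bounded in [0, 1].
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in a if a[w] > 0)
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical intermediate- and target-task corpora.
source = ["the movie was great", "a dull plot"]
target = ["great acting and plot", "the film was dull"]
vocab = set(tok for t in source + target for tok in t.split())
p, q = unigram_dist(source, vocab), unigram_dist(target, vocab)
score = js_divergence(p, q)
```

Scores like `score` could then rank candidate task pairs, so that only the lowest-divergence pairs are actually fine-tuned and evaluated.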