Prevailing deep models are single-purpose and overspecialize on individual tasks. However, when extended to new tasks, they typically forget previously learned skills and learn from scratch. We address this issue by introducing SkillNet-NLU, a general-purpose model that stitches together existing skills to learn new tasks more effectively. The key feature of our approach is that it is sparsely activated, guided by predefined skills. Unlike traditional dense models, which always activate all model parameters, SkillNet-NLU activates only the parts of the model parameters whose skills are relevant to the target task. When learning a new task, our approach precisely activates the required skills and also provides an option to add new skills. We evaluate on natural language understanding tasks and report the following findings. First, with only one model checkpoint, SkillNet-NLU performs better than task-specific fine-tuning and two multi-task learning baselines (i.e., a dense model and a Mixture-of-Experts model) on six tasks. Second, sparsely activated pre-training further improves the overall performance. Third, SkillNet-NLU significantly outperforms baseline systems when extended to new tasks.
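To make the skill-guided sparse activation concrete, here is a minimal PyTorch sketch of one layer in which each predefined skill owns its own feed-forward module and only the skill modules relevant to the target task are computed. All names, sizes, the averaging combination, and the `add_skill` helper are illustrative assumptions, not the paper's actual architecture or configuration.

```python
import torch
import torch.nn as nn


class SkillGatedFFN(nn.Module):
    """Sketch of a layer that is sparsely activated by predefined skills.

    Each skill has its own feed-forward expert. For a given task, only the
    experts of task-relevant skills are run (a fixed, predefined subset,
    not a learned router), and their outputs are averaged.
    """

    def __init__(self, hidden_size: int, skills: list):
        super().__init__()
        self.hidden_size = hidden_size
        self.experts = nn.ModuleDict({s: self._make_expert() for s in skills})

    def _make_expert(self) -> nn.Module:
        return nn.Sequential(
            nn.Linear(self.hidden_size, 4 * self.hidden_size),
            nn.GELU(),
            nn.Linear(4 * self.hidden_size, self.hidden_size),
        )

    def add_skill(self, skill: str) -> None:
        # Option to add a new skill module when extending to a new task.
        self.experts[skill] = self._make_expert()

    def forward(self, x: torch.Tensor, active_skills: list) -> torch.Tensor:
        # Only experts for the task-relevant skills are activated.
        outputs = [self.experts[s](x) for s in active_skills]
        return torch.stack(outputs, dim=0).mean(dim=0)


# Hypothetical usage: a sentiment task might activate only two of the skills.
if __name__ == "__main__":
    layer = SkillGatedFFN(hidden_size=768,
                          skills=["semantics", "sentiment", "ner", "nli"])
    tokens = torch.randn(2, 16, 768)           # (batch, sequence, hidden)
    out = layer(tokens, active_skills=["semantics", "sentiment"])
    print(out.shape)                            # torch.Size([2, 16, 768])
```

In this sketch, parameters of inactive skills receive no forward computation and hence no gradient, which is one simple way a predefined-skill gating scheme can avoid overwriting previously learned skills when a new task is added.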