Multi-Task Learning (MTL) is widely accepted in Natural Language Processing as a standard technique for learning multiple related tasks in a single model. Training an MTL model requires having the training data for all tasks available at the same time. As systems usually evolve over time (e.g., to support new functionalities), adding a new task to an existing MTL model usually requires retraining the model from scratch on all tasks, which can be time-consuming and computationally expensive. Moreover, in some scenarios, the data used to train the original model may no longer be available, for example due to storage or privacy concerns. In this paper, we approach the problem of incrementally expanding the capability of MTL models to solve new tasks over time by distilling the knowledge of a model already trained on n tasks into a new one that solves n+1 tasks. To avoid catastrophic forgetting, we propose to exploit unlabeled data drawn from the same distributions as the old tasks. Our experiments on publicly available benchmarks show that this technique dramatically benefits the distillation by preserving the already acquired knowledge (i.e., preventing performance drops of up to 20% on the old tasks) while achieving good performance on the incrementally added tasks. Further, we show that our approach is beneficial in practical settings by using data from a leading voice assistant.
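To make the described setup concrete, below is a minimal sketch (not the authors' released code) of the kind of training objective the abstract describes: a student MTL model covering n+1 tasks is trained with a distillation loss against the frozen teacher's soft outputs on unlabeled inputs from the old tasks' distributions, plus a supervised loss on labeled data for the newly added task. Names such as `student`, `teacher`, `unlabeled_old_batch`, `new_task_id`, and the temperature/weighting values are illustrative assumptions.

```python
# Sketch of an incremental distillation objective, assuming standard
# temperature-scaled KL distillation (Hinton-style) plus a new-task loss.
import torch
import torch.nn.functional as F


def incremental_distillation_loss(
    student,                # student model: (inputs, task_id) -> logits
    teacher,                # frozen teacher trained on the n old tasks
    unlabeled_old_batch,    # dict: old task_id -> tensor of unlabeled inputs
    new_task_batch,         # (inputs, labels) for the new (n+1)-th task
    new_task_id,
    temperature=2.0,        # softening temperature (assumed value)
    alpha=0.5,              # distillation vs. new-task weight (assumed value)
):
    # Distillation term: match the student's per-task output distributions to
    # the teacher's on unlabeled old-task inputs (no gold labels required).
    distill = 0.0
    for task_id, inputs in unlabeled_old_batch.items():
        with torch.no_grad():
            t_logits = teacher(inputs, task_id)
        s_logits = student(inputs, task_id)
        distill = distill + F.kl_div(
            F.log_softmax(s_logits / temperature, dim=-1),
            F.softmax(t_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
    distill = distill / max(len(unlabeled_old_batch), 1)

    # Supervised term for the newly added task.
    inputs, labels = new_task_batch
    new_task_loss = F.cross_entropy(student(inputs, new_task_id), labels)

    return alpha * distill + (1.0 - alpha) * new_task_loss
```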