Pretraining has been shown to scale well with compute, data size, and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning has required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation, requires limited communication, and shares no data. Consequently, ColD Fusion can create a synergistic loop in which finetuned models are recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find that ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
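The synergistic loop described above can be pictured as contributors finetuning private copies of a shared model on their own data and sending back only weights, which are then merged into the next shared model. Below is a minimal sketch of one such iteration, assuming simple weight averaging as the merge operator; the toy model, datasets, and hyperparameters are illustrative placeholders, not the paper's actual setup.

```python
# Minimal sketch of a ColD Fusion-style loop, assuming weight averaging as the
# merge step. The toy model and random data stand in for contributors' private
# datasets; no data ever leaves a contributor, only model weights.
import copy
import torch
import torch.nn as nn


def finetune(model: nn.Module, data, steps: int = 5, lr: float = 1e-3) -> nn.Module:
    """Finetune a private copy of the shared model on one contributor's dataset."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model


def fuse(models) -> dict:
    """Merge contributors' weights by averaging each parameter tensor."""
    fused = copy.deepcopy(models[0].state_dict())
    for key in fused:
        fused[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]
        ).mean(dim=0)
    return fused


# Toy shared model and two contributors with disjoint, private datasets.
torch.manual_seed(0)
shared = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
datasets = [
    [(torch.randn(32, 8), torch.randint(0, 2, (32,)))],  # contributor A's data
    [(torch.randn(32, 8), torch.randint(0, 2, (32,)))],  # contributor B's data
]

# Each round recycles the contributors' finetuned models into a better shared model.
for iteration in range(3):
    contributions = [finetune(shared, data) for data in datasets]
    shared.load_state_dict(fuse(contributions))
```

Each round communicates only one set of weights per contributor, which is what keeps the communication limited and the data private in this sketch.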