Research on neural networks has largely focused on understanding a single model trained on a single dataset. Relatively little is known, however, about the relationships between different models, especially those trained or tested on different datasets. We address this by studying how the weight space and the underlying loss landscape of different models are interconnected. Specifically, we demonstrate that fine-tuned models that were optimized for high performance reside in well-defined regions in weight space, and vice versa: any model that resides anywhere in those regions also has high performance. Notably, language models fine-tuned on the same dataset form a tight cluster in weight space, while models fine-tuned on different datasets from the same underlying task form a looser cluster. Moreover, traversing the region between models reaches new models that perform comparably to, or even better than, models found via fine-tuning, even on tasks the original models were not fine-tuned on. Our findings provide insight into the relationships between models, demonstrating that a model positioned between two similar models can acquire the knowledge of both. We leverage this finding to design a method for picking a better starting point for efficient fine-tuning: starting from the center of the region is as good as or better than starting from the pre-trained model on 11 of 12 datasets and improves accuracy by 3.06 points on average.
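The "center of the region" idea can be illustrated with a short sketch: averaging the parameters of several models fine-tuned from the same pre-trained checkpoint gives a centroid in weight space, which can then serve as the initialization for further fine-tuning. The snippet below is a minimal sketch using PyTorch and Hugging Face Transformers; the checkpoint paths are hypothetical placeholders, and it assumes all models share the same architecture and were fine-tuned from the same base model. It is not the paper's exact procedure, only an illustration of the underlying operation.

```python
# Minimal sketch: average the weights of several fine-tuned models and use the
# resulting centroid as a starting point for further fine-tuning.
# Checkpoint names below are hypothetical placeholders.
import torch
from transformers import AutoModelForSequenceClassification

finetuned_paths = [
    "finetuned-model-seed0",  # hypothetical checkpoints, all fine-tuned
    "finetuned-model-seed1",  # from the same pre-trained base model
    "finetuned-model-seed2",
]

# Load the state dicts of the fine-tuned models.
state_dicts = [
    AutoModelForSequenceClassification.from_pretrained(p).state_dict()
    for p in finetuned_paths
]

# Element-wise average of every parameter tensor: the centroid of the cluster
# that the fine-tuned models form in weight space. Tensors are cast to float
# so that integer buffers can also be averaged.
avg_state_dict = {
    name: torch.mean(
        torch.stack([sd[name].float() for sd in state_dicts]), dim=0
    )
    for name in state_dicts[0]
}

# Initialize a model with the averaged weights; fine-tuning on a new dataset
# can then start from this point instead of from the pre-trained model.
center_model = AutoModelForSequenceClassification.from_pretrained(finetuned_paths[0])
center_model.load_state_dict(avg_state_dict)
```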