It is widely accepted in the mode connectivity literature that when two neural networks are trained similarly on the same data, they are connected by a path through parameter space over which test set accuracy is maintained. Under some circumstances, including transfer learning from pretrained models, these paths are presumed to be linear. In contrast to existing results, we find that among text classifiers (trained on MNLI, QQP, and CoLA), some pairs of finetuned models have large barriers of increasing loss on the linear paths between them. On each task, we find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster -- models that occupy separate basins on the surface. By measuring performance on specially-crafted diagnostic datasets, we find that these clusters correspond to different generalization strategies: one cluster behaves like a bag of words model under domain shift, while another cluster uses syntactic heuristics. Our work demonstrates how the geometry of the loss surface can guide models towards different heuristic functions.
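For concreteness, a minimal sketch of the barrier measurement described above follows. It linearly interpolates the parameters of two finetuned checkpoints of the same architecture and evaluates loss at points along the path; the model object, checkpoint state dicts, and the `eval_loss` helper are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch (illustrative only): measure the loss barrier on the linear
# path between two finetuned checkpoints of the same architecture.
import torch


def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Parameter-wise linear interpolation (1 - alpha) * A + alpha * B."""
    out = {}
    for k in sd_a:
        if sd_a[k].is_floating_point():
            out[k] = (1 - alpha) * sd_a[k] + alpha * sd_b[k]
        else:
            # Integer buffers (e.g. position ids) are copied unchanged.
            out[k] = sd_a[k]
    return out


def loss_barrier(model, sd_a, sd_b, eval_loss, num_points=11):
    """Largest increase in loss along the path, relative to the endpoint average.

    `eval_loss` is an assumed helper returning mean test-set loss for `model`.
    A value well above zero indicates the two models sit in separate basins.
    """
    losses = []
    for i in range(num_points):
        alpha = i / (num_points - 1)
        model.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha))
        losses.append(eval_loss(model))
    endpoint_mean = 0.5 * (losses[0] + losses[-1])
    return max(losses) - endpoint_mean
```

In this sketch, a pair of models is called linearly connected when the returned barrier stays near zero across the path, and disconnected when the interpolated loss rises well above the endpoints.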