Standard neural networks struggle to generalize under distribution shifts in computer vision. Fortunately, combining multiple networks can consistently improve out-of-distribution generalization. In particular, weight averaging (WA) strategies were shown to perform best on the competitive DomainBed benchmark; they directly average the weights of multiple networks despite their nonlinearities. In this paper, we propose Diverse Weight Averaging (DiWA), a new WA strategy whose main motivation is to increase the functional diversity across averaged models. To this end, DiWA averages weights obtained from several independent training runs: indeed, models obtained from different runs are more diverse than those collected along a single run thanks to differences in hyperparameters and training procedures. We motivate the need for diversity by a new bias-variance-covariance-locality decomposition of the expected error, exploiting similarities between WA and standard functional ensembling. Moreover, this decomposition highlights that WA succeeds when the variance term dominates, which we show occurs when the marginal distribution changes at test time. Experimentally, DiWA consistently improves the state of the art on DomainBed without inference overhead.
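For intuition, the sketch below illustrates the uniform weight-averaging step described above, written in PyTorch; the function name `uniform_weight_average` and the surrounding setup are illustrative assumptions, not the authors' released code. It assumes the averaged networks share one architecture and were fine-tuned from a shared initialization in independent runs, which is what makes their weights directly averageable into a single model with no inference overhead.

```python
import copy
import torch

def uniform_weight_average(models):
    """Return a single model whose parameters are the uniform average of
    the given models' parameters (all models share one architecture)."""
    averaged = copy.deepcopy(models[0])
    avg_state = averaged.state_dict()
    for key, tensor in avg_state.items():
        if tensor.is_floating_point():
            # Average the corresponding tensor across all runs.
            avg_state[key] = torch.stack(
                [m.state_dict()[key] for m in models], dim=0
            ).mean(dim=0)
        # Integer buffers (e.g. BatchNorm's num_batches_tracked) are kept
        # from the first model rather than averaged.
    averaged.load_state_dict(avg_state)
    return averaged

# `runs` would hold networks fine-tuned in independent runs (different
# hyperparameters, data orders, seeds) from a shared pre-trained
# initialization; the result is one network, so inference costs a single
# forward pass.
# averaged_model = uniform_weight_average(runs)
```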