Foundation models are redefining how AI systems are built. Practitioners now follow a standard procedure to build their machine learning solutions: download a copy of a foundation model, and fine-tune it using some in-house data for the target task of interest. Consequently, the Internet now hosts many fine-tunings of a handful of foundation models, specialized to diverse tasks. Yet, these individual fine-tunings often lack strong generalization and exist in isolation without benefiting from each other. In our opinion, this is a missed opportunity, as these specialized models contain diverse features. Based on this insight, we propose model recycling, a simple strategy that leverages multiple fine-tunings of the same foundation model on diverse auxiliary tasks, and repurposes them as rich and diverse initializations for the target task. Specifically, model recycling fine-tunes each specialized model on the target task in parallel, and then averages the weights of all target fine-tunings into a final model. Empirically, we show that model recycling maximizes model diversity by benefiting from diverse auxiliary tasks, and achieves a new state of the art on the reference DomainBed benchmark for out-of-distribution generalization. Looking forward, model recycling is a contribution to the emerging paradigm of updatable machine learning where, akin to open-source software development, the community collaborates to incrementally and reliably update machine learning models.
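The recycling recipe sketched in the abstract can be summarized in a few lines: fine-tune each auxiliary model on the target task (independently, hence in parallel), then average the resulting weights. Below is a minimal illustrative sketch assuming PyTorch models that share one architecture (all derived from the same foundation model); the function names `recycle` and `finetune_on_target` are hypothetical placeholders, not the paper's code.

```python
# Hedged sketch of the recycling procedure described above.
# Assumptions: all models share the same architecture and state_dict keys;
# `finetune_on_target` is a user-supplied (hypothetical) training routine.
from copy import deepcopy
import torch

def recycle(auxiliary_models, finetune_on_target):
    """Fine-tune each auxiliary model on the target task, then average weights."""
    # Step 1: repurpose each auxiliary fine-tuning as an initialization for the
    # target task. These runs are independent, so they can be launched in parallel.
    target_models = [finetune_on_target(deepcopy(m)) for m in auxiliary_models]

    # Step 2: average the weights of all target fine-tunings into one final model.
    averaged = deepcopy(target_models[0])
    avg_state = averaged.state_dict()
    states = [m.state_dict() for m in target_models]
    for key in avg_state:
        # Cast to float before averaging; integer buffers (e.g., BatchNorm
        # counters) may need special handling in a real implementation.
        avg_state[key] = torch.stack(
            [s[key].float() for s in states], dim=0
        ).mean(dim=0)
    averaged.load_state_dict(avg_state)
    return averaged
```

Because every target fine-tuning starts from the same foundation model, the averaged weights tend to remain in a compatible region of parameter space, which is what makes the final averaging step meaningful.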