Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after adapting the model to the target task. Through a combination of synthetic and natural experiments, we show that bias transfer both (a) arises in realistic settings (such as when pre-training on ImageNet or other standard datasets) and (b) can occur even when the target dataset is explicitly de-biased. As transfer-learned models are increasingly deployed in the real world, our work highlights the importance of understanding the limitations of pre-trained source models. Code is available at https://github.com/MadryLab/bias-transfer.
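To make the setup concrete, below is a minimal sketch of the transfer-learning pipeline the abstract refers to: an ImageNet pre-trained source model adapted to a downstream target task by replacing and training its classification head. This is an illustrative sketch under assumed choices (ResNet-18 backbone, 10 target classes, SGD hyperparameters), not the paper's actual experimental code, which lives in the linked repository.

```python
# Minimal sketch of the transfer-learning setup described above.
# Assumptions (not from the paper): ResNet-18 source model, 10 target
# classes, illustrative hyperparameters. Requires torchvision >= 0.13
# for the `weights=` API.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet pre-trained "source model".
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Adapt it to a hypothetical downstream "target task": replace the
# ImageNet classification head with a fresh one.
NUM_TARGET_CLASSES = 10  # illustrative
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# "Fixed-feature" transfer: freeze the pre-trained backbone so only the
# new head is trained on the target dataset.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=1e-2,
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()

# One training step on dummy data (stand-in for the real target dataset).
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, NUM_TARGET_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

Removing the freezing loop (and passing all parameters to the optimizer) gives the full fine-tuning variant of the same setup; both are standard ways to adapt a source model to a target task.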