This paper addresses the important problem of ranking pre-trained deep neural networks and screening the most transferable ones for downstream tasks. The problem is challenging because the ground-truth model ranking for each task can only be obtained by fine-tuning every pre-trained model on the target dataset, which is brute-force and computationally expensive. Recent methods propose lightweight transferability metrics to predict fine-tuning results; however, these approaches capture only static representations and neglect the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called \textbf{S}elf-challenging \textbf{F}isher \textbf{D}iscriminant \textbf{A}nalysis (\textbf{SFDA}), which offers several appealing benefits that existing works lack. First, SFDA embeds the static features into a Fisher space and refines them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate themselves on hard examples. Third, SFDA can easily select multiple pre-trained models for model ensembles. Extensive experiments on $33$ pre-trained models over $11$ downstream tasks show that SFDA is efficient, effective, and robust in measuring the transferability of pre-trained models. For instance, compared with the state-of-the-art method NLEEP, SFDA achieves an average gain of $59.1$\% while bringing a $22.5\times$ speedup in wall-clock time. The code will be available at \url{https://github.com/TencentARC/SFDA}.
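The core idea of scoring transferability via Fisher-space separability can be illustrated with a minimal sketch. This is not the authors' SFDA implementation (it omits the refinement and self-challenging mechanism); it only shows, under the assumption that `features` stands in for penultimate-layer activations extracted from one pre-trained model on the target dataset, how a Fisher discriminant projection can yield a cheap separability score without any fine-tuning:

```python
# Illustrative sketch (NOT the authors' SFDA): rank a pre-trained model by
# how separable its static features are in a Fisher discriminant space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_class, dim = 100, 32

# Synthetic stand-in for features extracted from a pre-trained model on a
# 3-class downstream task (class means shifted to make classes separable).
features = np.concatenate(
    [rng.normal(loc=c, scale=1.0, size=(n_per_class, dim)) for c in range(3)]
)
labels = np.repeat(np.arange(3), n_per_class)

# Fit Fisher/linear discriminant analysis on the static features, then use
# the mean posterior probability of the true class as a separability score:
# higher scores suggest the features will transfer better after fine-tuning.
lda = LinearDiscriminantAnalysis()
lda.fit(features, labels)
probs = lda.predict_proba(features)
score = probs[np.arange(len(labels)), labels].mean()
print(f"separability score: {score:.3f}")
```

Computing such a score for each candidate checkpoint and sorting gives a model ranking at a tiny fraction of the cost of fine-tuning every model.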