We consider the problem of training a deep neural network on a given classification task, e.g., ImageNet-1K (IN1K), so that it excels both at the training task and at other (future) transfer tasks. These two seemingly contradictory properties impose a trade-off between improving the model's generalization and maintaining its performance on the original task. Models trained with self-supervised learning tend to generalize better than their supervised counterparts for transfer learning; yet, they still lag behind supervised models on IN1K. In this paper, we propose a supervised learning setup that leverages the best of both worlds. We extensively analyze supervised training using multi-scale crops for data augmentation and an expendable projector head, and reveal that the design of the projector allows us to control the trade-off between performance on the training task and transferability. We further replace the last layer of class weights with class prototypes computed on the fly using a memory bank and derive two models: t-ReX, which achieves a new state of the art for transfer learning and outperforms top methods such as DINO and PAWS on IN1K, and t-ReX*, which matches the highly optimized RSB-A1 model on IN1K while performing better on transfer tasks. Code and pretrained models: https://europe.naverlabs.com/t-rex
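To make the prototype-based classification head concrete, the sketch below (not the authors' released code) shows one way to replace the final layer of class weights with class prototypes averaged on the fly from a memory bank of projector outputs; the function and argument names (prototype_logits, memory_feats, memory_labels, temperature) are illustrative assumptions.

```python
# Hedged sketch: class prototypes from a memory bank instead of learned class weights.
import torch
import torch.nn.functional as F

def prototype_logits(feats, memory_feats, memory_labels, num_classes, temperature=0.1):
    """Score a batch of projector outputs against per-class prototypes.

    feats:         (B, D) projector outputs for the current batch
    memory_feats:  (M, D) features stored in the memory bank
    memory_labels: (M,)   integer class labels of the stored features
    """
    # Average the memory-bank features per class to form one prototype per class.
    prototypes = torch.zeros(num_classes, memory_feats.size(1), device=memory_feats.device)
    prototypes.index_add_(0, memory_labels, memory_feats)
    counts = torch.bincount(memory_labels, minlength=num_classes).clamp(min=1)
    prototypes = prototypes / counts.unsqueeze(1)

    # Cosine-similarity logits between batch features and prototypes, scaled by a temperature.
    feats = F.normalize(feats, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    return feats @ prototypes.t() / temperature

# Example usage (hypothetical tensors): logits feed a standard cross-entropy loss.
# loss = F.cross_entropy(prototype_logits(z, bank_z, bank_y, num_classes=1000), targets)
```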