Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples. While the literature has demonstrated great successes via representation learning, in this work we show that substantial performance improvements on downstream tasks can also be achieved through appropriate design of the adaptation process. Specifically, we propose a modular adaptation method that selectively performs multiple state-of-the-art (SOTA) adaptation methods in sequence. As different downstream tasks may require different types of adaptation, our modular adaptation enables the dynamic configuration of the most suitable modules for each downstream task. Moreover, as an extension of existing cross-domain 5-way k-shot benchmarks (e.g., miniImageNet -> CUB), we create a new high-way (~100) k-shot benchmark with data from 10 different datasets. This benchmark provides a diverse set of domains and allows the use of stronger representations learned from ImageNet. Experimental results show that by customizing the adaptation process for each downstream task, our modular adaptation pipeline (MAP) improves 5-shot classification accuracy by 3.1% over finetuning and Prototypical Networks baselines.
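The core idea of MAP — applying a task-dependent sequence of adaptation modules — can be sketched as follows. This is a minimal illustrative sketch under our own assumptions: the module names (`finetune`, `proto`, `select`) and the dictionary-based dispatch are hypothetical stand-ins, not the paper's actual implementation.

```python
# Hypothetical sketch of a modular adaptation pipeline (MAP): adaptation
# modules are applied in sequence, and the subset and order are configured
# per downstream task. Module names here are illustrative assumptions.

def finetune(state, task):
    """Stand-in for finetuning the backbone on the support set."""
    return state + ["finetune"]

def prototype_head(state, task):
    """Stand-in for fitting a Prototypical-Networks-style classifier head."""
    return state + ["prototypes"]

def feature_selection(state, task):
    """Stand-in for selecting a task-relevant feature subset."""
    return state + ["feature_selection"]

MODULES = {
    "finetune": finetune,
    "proto": prototype_head,
    "select": feature_selection,
}

def run_map(config, task, state=None):
    """Apply the configured adaptation modules in order for the given task."""
    state = state or []
    for name in config:
        state = MODULES[name](state, task)
    return state

# A low-shot task might use feature selection plus prototypes only,
# while a task with more data might also enable finetuning.
print(run_map(["select", "proto"], task="5-shot"))
```

The point of the dispatch-table design is that the pipeline configuration is data, so the best module sequence can be searched or chosen per task rather than hard-coded.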