Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available under extreme differences between the source and target domains, has recently attracted considerable attention. Recent studies on CD-FSL have generally developed transfer-learning-based approaches that pre-train a neural network on popular labeled source-domain datasets and then transfer it to the target-domain data. Although the labeled datasets may provide suitable initial parameters for the target data, the domain difference between source and target may hinder fine-tuning on the target domain. This paper proposes a simple yet powerful method that re-randomizes the parameters fitted on the source domain before adapting to the target data. The re-randomization resets the source-specific parameters of the source pre-trained model, which facilitates fine-tuning on the target domain and improves few-shot performance.
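To make the idea concrete, below is a minimal PyTorch sketch of the re-randomize-then-fine-tune recipe. The use of torchvision's `resnet18` as the backbone, and the choice of its last residual block (`layer4`) as the "source-specific" parameters to reset, are illustrative assumptions; the paper's actual selection of which layers to re-randomize may differ.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def re_randomize(module: nn.Module) -> None:
    """Reset every parameterized submodule to a fresh random init."""
    for m in module.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()

# Source pre-trained backbone (here: ImageNet weights as the source domain).
model = resnet18(weights="IMAGENET1K_V1")

# Assumption: the deepest block holds the most source-specific features,
# so we re-randomize it before adapting to the target domain.
re_randomize(model.layer4)

# Replace the classifier head for the few-shot target task
# (e.g., a 5-way episode).
num_target_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Fine-tune on the few labeled target samples.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of target-domain support samples."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design intuition is that early layers learn transferable low-level features while the deepest layers overfit to source-domain statistics, so resetting only the latter keeps the useful initialization while freeing capacity for the target domain.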