Freezing the pre-trained backbone has become a standard paradigm to avoid overfitting in few-shot segmentation. In this paper, we rethink the paradigm and explore a new regime: {\em fine-tuning a small part of the parameters in the backbone}. We present a solution to overcome the overfitting problem, leading to better model generalization on learning novel classes. Our method decomposes backbone parameters into three successive matrices via the Singular Value Decomposition (SVD), then {\em only fine-tunes the singular values} and keeps the others frozen. This design allows the model to adjust feature representations on novel classes while maintaining the semantic clues within the pre-trained backbone. We evaluate our {\em Singular Value Fine-tuning (SVF)} approach on various few-shot segmentation methods with different backbones. We achieve state-of-the-art results on both Pascal-5$^i$ and COCO-20$^i$ across 1-shot and 5-shot settings. Hopefully, this simple baseline will encourage researchers to rethink the role of backbone fine-tuning in few-shot settings. The source code and models will be available at \url{https://github.com/syp2ysy/SVF}.
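To make the idea concrete, below is a minimal sketch of the singular-value fine-tuning scheme described above, written in PyTorch on a single linear layer for illustration; the module name \texttt{SVFLinear} and its details are our own assumptions, not the authors' released implementation (which operates on the convolutional layers of the backbone).

\begin{verbatim}
# Minimal sketch of Singular Value Fine-tuning (SVF): decompose a
# pre-trained weight with SVD, freeze the singular vectors U and V^T,
# and train only the singular values S. Names here are illustrative.
import torch
import torch.nn as nn


class SVFLinear(nn.Module):
    """Wraps a pre-trained linear layer so only its singular values train."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        # Decompose the pre-trained weight: W = U diag(S) V^T.
        U, S, Vh = torch.linalg.svd(pretrained.weight.data,
                                    full_matrices=False)
        self.register_buffer("U", U)        # frozen singular vectors
        self.register_buffer("Vh", Vh)      # frozen singular vectors
        self.S = nn.Parameter(S.clone())    # the only trainable part
        self.register_buffer(
            "bias",
            pretrained.bias.data.clone() if pretrained.bias is not None
            else None)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-assemble the weight from frozen U, Vh and trainable S.
        weight = self.U @ torch.diag(self.S) @ self.Vh
        return nn.functional.linear(x, weight, self.bias)


if __name__ == "__main__":
    layer = nn.Linear(64, 32)      # stands in for a pre-trained layer
    svf_layer = SVFLinear(layer)
    out = svf_layer(torch.randn(4, 64))
    trainable = [n for n, p in svf_layer.named_parameters()
                 if p.requires_grad]
    print(out.shape, trainable)    # torch.Size([4, 32]) ['S']
\end{verbatim}

In this sketch the number of trainable parameters equals the number of singular values, a small fraction of the full weight matrix, which is consistent with the goal of adjusting feature representations while keeping most of the pre-trained backbone frozen.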