Few-shot learning (FSL) techniques seek to learn the underlying patterns in data from only a handful of samples, analogous to how humans learn from limited experience. In this low-data regime, failure modes of deep neural networks, such as shortcut learning and texture bias, are further exacerbated. Moreover, the importance of mitigating shortcut learning has not yet been fully explored in the few-shot setting. To address these issues, we propose LSFSL, which encourages the model to learn more generalizable features by exploiting the implicit prior information present in the data. Through comprehensive analyses, we demonstrate that LSFSL-trained models leverage the global semantics of the data and are therefore less vulnerable to changes in color schemes, spurious statistical correlations, and adversarial perturbations. Our findings highlight the potential of incorporating relevant priors into few-shot approaches to improve robustness and generalization.
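To make the idea of "exploiting an implicit prior in the data" concrete, the sketch below shows one plausible instantiation under assumptions not stated in the abstract: a shape prior extracted as Sobel edge maps, with a second branch trained on those edge maps and a distillation-style KL term aligning the two branches so the RGB features inherit shape-aware, global semantics. All names here (sobel_edges, shape_aware_loss, alpha, tau, rgb_net, shape_net) are hypothetical illustrations, not the paper's verified implementation.

```python
# Hypothetical sketch of shape-prior-guided training (assumed details,
# not the paper's confirmed method). Two classifiers see the same batch:
# one on RGB images, one on edge maps; their predictions are aligned.
import torch
import torch.nn.functional as F


def sobel_edges(x: torch.Tensor) -> torch.Tensor:
    """Approximate shape information as per-channel Sobel gradient magnitude."""
    kx = torch.tensor([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]], device=x.device)
    ky = kx.t()
    c = x.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)  # depthwise kernels
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)


def shape_aware_loss(rgb_logits, shape_logits, labels, alpha=0.5, tau=4.0):
    """Cross-entropy on both branches plus a KL term pulling the RGB
    branch's predictive distribution toward the shape branch's."""
    ce = F.cross_entropy(rgb_logits, labels) + F.cross_entropy(shape_logits, labels)
    log_p_rgb = F.log_softmax(rgb_logits / tau, dim=1)
    p_shape = F.softmax(shape_logits / tau, dim=1)
    align = F.kl_div(log_p_rgb, p_shape, reduction="batchmean") * tau ** 2
    return ce + alpha * align


if __name__ == "__main__":
    # Dummy 5-way batch with toy linear classifiers, just to show the wiring.
    images = torch.randn(8, 3, 84, 84)
    labels = torch.randint(0, 5, (8,))
    rgb_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 84 * 84, 5))
    shape_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 84 * 84, 5))
    loss = shape_aware_loss(rgb_net(images), shape_net(sobel_edges(images)), labels)
    loss.backward()
```

In this reading, the shape branch acts only as a training-time regularizer; at test time the RGB branch alone would be used, so the prior adds no inference cost.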