Few-Shot Learning (FSL) algorithms are commonly trained through Meta-Learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic the tasks seen during evaluation. However, standard training procedures overlook the real-world dynamic in which classes commonly occur at different frequencies. While it is generally understood that class imbalance harms the performance of supervised methods, little research has examined its impact on the FSL evaluation task. Our analysis compares 10 state-of-the-art meta-learning and FSL methods across different imbalance distributions and rebalancing techniques. Our results reveal that 1) some FSL methods display a natural disposition against imbalance, while most other approaches suffer a performance drop of up to 17\% relative to the balanced task without the appropriate mitigation; 2) contrary to popular belief, many meta-learning algorithms do not automatically learn to balance from exposure to imbalanced training tasks; 3) classical rebalancing strategies, such as random oversampling, can still be very effective, leading to state-of-the-art performance, and should not be overlooked; 4) FSL methods are more robust against meta-dataset imbalance than against task-level imbalance with a similar imbalance ratio ($\rho<20$), an effect that holds even in long-tail datasets under a larger imbalance ($\rho=65$).
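As a concrete illustration of point 3, the sketch below shows random oversampling applied to an imbalanced few-shot support set, with the imbalance ratio $\rho$ taken as the ratio between the largest and smallest class sizes. This is a minimal, hypothetical example (the helper names are not from the paper's code), not the authors' implementation.

```python
import random
from collections import Counter

def imbalance_ratio(labels):
    # rho = size of the largest class / size of the smallest class
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def random_oversample(samples, labels):
    # Duplicate minority-class samples at random until every class
    # matches the majority class size, yielding a balanced support set.
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    new_samples, new_labels = [], []
    for y, xs in by_class.items():
        extra = [random.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            new_samples.append(x)
            new_labels.append(y)
    return new_samples, new_labels

# Example: a 3-way support set with 5/3/1 shots (rho = 5)
labels = [0] * 5 + [1] * 3 + [2] * 1
samples = list(range(len(labels)))
print(imbalance_ratio(labels))   # 5.0
xs, ys = random_oversample(samples, labels)
print(Counter(ys))               # each class now has 5 samples
```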