While a broad range of techniques has been proposed to tackle distribution shift, the simple baseline of training on an $\textit{undersampled}$ balanced dataset often achieves close to state-of-the-art accuracy across several popular benchmarks. This is rather surprising, since undersampling algorithms discard excess majority-group data. To understand this phenomenon, we ask whether learning is fundamentally constrained by a lack of minority-group samples. We prove that this is indeed the case in the setting of nonparametric binary classification. Our results show that in the worst case, an algorithm cannot outperform undersampling unless there is a high degree of overlap between the train and test distributions (which is unlikely to be the case in real-world datasets), or unless the algorithm leverages additional structure about the distribution shift. In particular, in the case of label shift we show that there is always an undersampling algorithm that is minimax optimal. In the case of group-covariate shift we show that there is an undersampling algorithm that is minimax optimal when the overlap between the group distributions is small. We also perform an experimental case study on a label shift dataset and find that, in line with our theory, the test accuracy of robust neural network classifiers is constrained by the number of minority samples.
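For concreteness, the undersampling baseline referred to above can be implemented as follows. This is a minimal sketch, not the paper's exact experimental pipeline: the function name, variable names, and the choice to subsample every group down to the size of the smallest group are illustrative assumptions; under label shift, the group variable simply coincides with the class label.

```python
import numpy as np

def undersample_to_balance(X, y, groups, seed=None):
    """Return a group-balanced subset of (X, y, groups).

    Each group is randomly subsampled (without replacement) down to the
    size of the smallest group, so all groups contribute equally many
    training examples.
    """
    rng = np.random.default_rng(seed)
    unique_groups = np.unique(groups)

    # The minority group determines the per-group sample budget.
    n_min = min(int(np.sum(groups == g)) for g in unique_groups)

    keep = []
    for g in unique_groups:
        idx = np.flatnonzero(groups == g)
        keep.append(rng.choice(idx, size=n_min, replace=False))
    keep = np.concatenate(keep)

    return X[keep], y[keep], groups[keep]
```

A classifier trained on the output of this routine sees a balanced group (or label) distribution; any excess majority-group data is simply discarded, which is exactly the information loss the abstract argues is, in the worst case, unavoidable.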