Model bias triggered by long-tailed data has been widely studied. However, a measure based on the number of samples cannot explain three phenomena simultaneously: (1) When data are sufficient, additional samples yield only marginal gains in classification performance. (2) When data are insufficient, classification performance decays precipitously as the number of training samples decreases. (3) Models trained on sample-balanced datasets still exhibit different biases toward different classes. In this work, we define and quantify the semantic scale of a class, which measures its feature diversity. We find experimentally that semantic scale exhibits a marginal effect, which accounts well for the first two phenomena. Further, we propose a quantitative measure of semantic scale imbalance that accurately reflects model bias on multiple datasets, even on sample-balanced ones, revealing a novel perspective for the study of class imbalance. Because semantic scale imbalance is so prevalent, we propose semantic-scale-balanced learning, including a general loss improvement scheme and a dynamic re-weighting training framework that overcomes the challenge of computing semantic scales in real time during training. Comprehensive experiments show that dynamic semantic-scale-balanced learning consistently improves model performance on large-scale long-tailed and non-long-tailed datasets of natural and medical images, providing a promising starting point for mitigating this prevalent but previously unnoticed form of model bias.
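To make the idea concrete, the sketch below shows one plausible way to quantify a class's semantic scale as feature diversity and to turn per-class scales into loss weights. The abstract does not give the exact formulation, so the log-determinant volume measure, the eps regularizer, and the inverse-scale weighting rule here are illustrative assumptions, not the paper's method.

import numpy as np

def semantic_scale(features: np.ndarray, eps: float = 1e-4) -> float:
    """Illustrative proxy for the semantic scale (feature diversity) of one class.

    Measures the volume spanned by the class's feature vectors via the
    log-determinant of a regularized covariance matrix; larger values
    indicate more diverse features. `eps` is an assumed regularizer.
    features: (m, d) array of m feature vectors of dimension d.
    """
    m, d = features.shape
    z = features - features.mean(axis=0, keepdims=True)  # center the features
    cov = (z.T @ z) / m                                  # (d, d) covariance
    # 0.5 * log det(I + cov / eps): grows with the spread of the features
    _, logdet = np.linalg.slogdet(np.eye(d) + cov / eps)
    return 0.5 * logdet

def class_weights(scales: np.ndarray) -> np.ndarray:
    """Assumed re-weighting rule: weight classes inversely to their
    normalized semantic scales, so low-diversity classes get larger
    loss weights; weights are normalized to average 1."""
    s = scales / scales.sum()
    w = 1.0 / (s + 1e-12)
    return w / w.mean()

In a dynamic re-weighting setup of the kind the abstract describes, such scales would be re-estimated from the evolving features during training and the resulting weights applied to the per-class loss terms.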