The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics, such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when used with the modern practice of training deep nets to the terminal phase of training (TPT), that is, training beyond zero training error. This observation has motivated a recent flurry of activity in developing heuristic alternatives that follow the intuitive mechanism of promoting larger margins for minorities. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in the TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the relative margins between classes change appropriately. To show this, we discover a connection between the multiplicative CE modification and so-called cost-sensitive support-vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show numerically that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS-loss to group-sensitive classification, thus treating the two common types of imbalance (label/group) in a unifying way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between different metrics.
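To make the distinction between multiplicative and additive logit adjustments concrete, the following is a minimal numpy sketch of a vector-scaling-style cross-entropy for a single example. It assumes per-class multiplicative factors `Delta`, additive offsets `iota`, and optional per-class weights `omega`; the parameter names and this particular single-example formulation are our illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def vs_loss(logits, y, Delta, iota, omega=None):
    """Vector-scaling-style loss for one example with label y.

    Applies multiplicative (Delta) and additive (iota) per-class
    logit adjustments, then takes weighted negative log-softmax:
        loss = -omega[y] * log softmax(Delta * logits + iota)[y]
    With Delta = 1, iota = 0, omega = None this reduces to
    standard cross-entropy.
    """
    z = Delta * logits + iota
    z = z - z.max()                        # shift for numerical stability
    log_p = z - np.log(np.exp(z).sum())    # log-softmax
    w = 1.0 if omega is None else omega[y]
    return -w * log_p[y]

# Example: with identity adjustments this is plain cross-entropy.
logits = np.array([2.0, 0.5, -1.0])
loss = vs_loss(logits, 0, np.ones(3), np.zeros(3))
```

Intuitively, shrinking `Delta[y]` for a minority class `y` (or raising `iota` on the competing classes) demands a larger margin for that class at convergence, which is the mechanism the analysis above studies.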