Negative sampling schemes enable efficient training in settings with a large number of classes by approximating a computationally expensive loss function that accounts for all labels. In this paper, we present a new connection between these schemes and loss modification techniques for countering label imbalance. We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels. Further, we provide a unified means to explicitly tackle both sampling bias, which arises from working with only a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance. We empirically verify our findings on long-tail classification and retrieval benchmarks.
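As background for the setting the abstract describes, the following is a minimal NumPy sketch (not the paper's own method) contrasting the full softmax cross-entropy, which sums over every class, with a sampled approximation that scores only a handful of sampled negatives and applies the standard log-probability correction for the sampling distribution. The function names and the uniform sampling distribution are illustrative assumptions.

```python
import numpy as np

def full_softmax_loss(logits, label):
    # Exact cross-entropy: the partition function sums over ALL classes,
    # which is what becomes expensive when the label space is large.
    return np.log(np.exp(logits).sum()) - logits[label]

def sampled_softmax_loss(logits, label, sample_probs, num_neg, rng):
    # Approximate the partition function using num_neg sampled negatives.
    # sample_probs is the (assumed, here uniform) negative-sampling
    # distribution; each sampled logit is corrected by subtracting the
    # log of its expected sample count, the usual sampled-softmax fix.
    num_classes = logits.shape[0]
    negs = rng.choice(num_classes, size=num_neg, replace=False, p=sample_probs)
    negs = negs[negs != label]  # drop the positive if it was drawn
    corrected = logits[negs] - np.log(num_neg * sample_probs[negs])
    candidates = np.concatenate(([logits[label]], corrected))
    return np.log(np.exp(candidates).sum()) - logits[label]
```

Because rare labels are drawn less often under skewed sampling distributions, their logits contribute less to the approximate partition function, which is one intuition for the trade-off between dominant and rare labels that the abstract refers to.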