This paper considers deep visual recognition on long-tailed data. To be general, we consider two applied scenarios, \ie, deep classification and deep metric learning. Under the long-tailed data distribution, the minority classes (\ie, the tail classes) occupy only relatively few samples and are prone to lack of within-class diversity. A fundamental solution is to augment the tail classes with higher diversity. To this end, we introduce a simple and reliable method named Memory-based Jitter (MBJ). We observe that during training, the deep model constantly changes its parameters after every iteration, yielding the phenomenon of \emph{weight jitters}. Consequently, given the same image as the input, two historical versions of the model generate two different features in the deeply-embedded space, resulting in \emph{feature jitters}. Using a memory bank, we collect these (weight or feature) jitters across multiple training iterations and obtain the so-called Memory-based Jitter. The accumulated jitters enhance the within-class diversity of the tail classes and consequently improve long-tailed visual recognition. With slight modifications, MBJ is applicable to two fundamental visual recognition tasks, \ie, deep image classification and deep metric learning (on long-tailed data). Extensive experiments on five long-tailed classification benchmarks and two deep metric learning benchmarks demonstrate significant improvements. Moreover, the achieved performance is on par with the state of the art on both tasks.
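To illustrate the idea, the following is a minimal sketch of how such a memory bank could accumulate feature jitters across training iterations; the class name \texttt{FeatureMemoryBank}, the fixed queue size, and the sampling routine are our own illustrative assumptions rather than the exact implementation described in this paper.
\begin{verbatim}
import torch
import torch.nn.functional as F

class FeatureMemoryBank:
    """Illustrative per-class feature memory (not the paper's exact code)."""

    def __init__(self, num_classes, feat_dim, size_per_class=32):
        self.size = size_per_class
        # one FIFO queue of historical features per class
        self.bank = {c: torch.zeros(0, feat_dim) for c in range(num_classes)}

    @torch.no_grad()
    def enqueue(self, feats, labels):
        # Features written at different iterations come from different
        # historical editions of the model, so the queue accumulates
        # feature jitters over time.
        feats = F.normalize(feats.detach(), dim=1)
        for f, y in zip(feats, labels.tolist()):
            queue = torch.cat([self.bank[y], f.unsqueeze(0)], dim=0)
            self.bank[y] = queue[-self.size:]   # keep the most recent entries

    def sample(self, label, k=8):
        # Draw stored jitters of a (tail) class to enlarge its
        # within-class diversity when computing the training loss.
        queue = self.bank[label]
        if queue.shape[0] == 0:
            return queue
        idx = torch.randint(0, queue.shape[0], (min(k, queue.shape[0]),))
        return queue[idx]
\end{verbatim}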