Data augmentation for minority classes is an effective strategy for long-tailed recognition, and a large number of such methods have been developed. Although these methods all ensure balance in sample quantity, the quality of the augmented samples is not always satisfactory for recognition, and they are prone to problems such as over-fitting, lack of diversity, and semantic drift. To address these issues, we propose Class-aware Universum Inspired Re-balance Learning (CaUIRL) for long-tailed recognition, which endows the Universum with class-aware ability to re-balance individual minority classes in both sample quantity and quality. In particular, we theoretically prove, from a Bayesian perspective, that the classifiers learned by CaUIRL are consistent with those learned under a balanced condition. In addition, we develop a higher-order mixup approach that automatically generates class-aware Universum (CaU) data without resorting to any external data. Unlike the traditional Universum, the generated Universum additionally takes domain similarity, class separability, and sample diversity into account. Extensive experiments on benchmark datasets demonstrate the surprising advantages of our method; in particular, the top-1 accuracy in minority classes is improved by 1.9% to 6% compared with the state-of-the-art method.
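To make the Universum-generation idea concrete, the following is a minimal sketch of producing Universum-style samples via standard mixup, i.e., mixing samples drawn from different classes so that the result belongs to neither class. The paper's higher-order, class-aware variant (CaU) is not fully specified in the abstract, so the function name `mixup_universum`, the Beta-distributed mixing coefficient, and the tensor shapes below are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: Universum-style sample generation via standard mixup.
# NOTE: this is basic two-sample mixup for illustration only; the
# paper's "higher-order" class-aware mixup (CaU) is not reproduced here.
import numpy as np

def mixup_universum(x_a, x_b, alpha=1.0, rng=None):
    """Mix two samples from DIFFERENT classes into a single
    in-between sample that belongs to neither class (Universum-style)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x_a + (1.0 - lam) * x_b

# Usage: mix a majority-class image with a minority-class image
# (dummy 3x32x32 tensors stand in for real images).
x_major = np.ones((3, 32, 32), dtype=np.float32)
x_minor = np.zeros((3, 32, 32), dtype=np.float32)
u = mixup_universum(x_major, x_minor)  # convex combination of the two
```

Because the result is a convex combination, each pixel of `u` lies between the corresponding pixels of the two inputs; the class-aware variant described in the paper would additionally steer this mixing by domain similarity, class separability, and sample diversity.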