Deep neural networks have made remarkable progress over the past few decades. However, because real-world data often exhibit a long-tailed distribution, vanilla deep models tend to be heavily biased toward the majority classes. To address this problem, state-of-the-art methods usually adopt a mixture of experts (MoE), with each expert focusing on a different part of the long-tailed distribution. The experts in these methods all share the same model depth, which neglects the fact that different classes may prefer to be fitted by models of different depths. To this end, we propose a novel MoE-based method called Self-Heterogeneous Integration with Knowledge Excavation (SHIKE). We first propose Depth-wise Knowledge Fusion (DKF), which fuses features between the shallow parts and the deep part of one network for each expert, making the experts more diverse in terms of representation. Based on DKF, we further propose Dynamic Knowledge Transfer (DKT) to reduce the influence of the hardest negative class, which has a non-negligible impact on the tail classes in our MoE framework. As a result, the classification accuracy on long-tailed data can be significantly improved, especially for the tail classes. SHIKE achieves state-of-the-art performance of 56.3%, 60.3%, 75.4%, and 41.9% on CIFAR100-LT (IF100), ImageNet-LT, iNaturalist 2018, and Places-LT, respectively.