Representing each class with a single center can hardly capture the complexity of the data distribution. Using multiple sub-centers is an alternative way to address this problem. However, existing multi-sub-center methods suffer from three typical issues: highly correlated sub-centers, classifier parameters that grow linearly with the number of classes, and a lack of intra-class compactness. To this end, we propose Fixed Sub-Centers (F-SC), which allows the model to create more discrepant sub-centers while saving memory and considerably cutting computational costs. Specifically, F-SC first samples a class center Ui for each class from a uniform distribution, and then generates a normal distribution for each class whose mean equals Ui. Finally, the sub-centers are sampled from the normal distribution corresponding to each class and kept fixed during training, avoiding the overhead of computing their gradients. Moreover, F-SC penalizes the Euclidean distance between each sample and its corresponding sub-center, which helps maintain intra-class compactness. Experimental results show that F-SC significantly improves accuracy on both image classification and fine-grained recognition tasks.
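The sampling scheme described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's exact formulation: the helper names (make_fixed_subcenters, FSCHead), the hyperparameters sigma and bound, and the max-over-sub-centers logit aggregation are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

def make_fixed_subcenters(num_classes, num_subcenters, feat_dim, sigma=0.1, bound=1.0):
    """Sample one center Ui per class from a uniform distribution, then draw
    sub-centers from a normal distribution N(Ui, sigma^2 I) around each center.
    sigma and bound are illustrative hyperparameters, not values from the paper."""
    centers = torch.empty(num_classes, feat_dim).uniform_(-bound, bound)   # Ui per class
    subcenters = centers.unsqueeze(1) + sigma * torch.randn(
        num_classes, num_subcenters, feat_dim
    )
    return subcenters  # shape: (num_classes, num_subcenters, feat_dim)

class FSCHead(nn.Module):
    """Classification head with fixed sub-centers (hypothetical interface)."""

    def __init__(self, num_classes, num_subcenters, feat_dim):
        super().__init__()
        # Registered as a buffer: fixed during training, no gradient is computed.
        self.register_buffer(
            "subcenters",
            make_fixed_subcenters(num_classes, num_subcenters, feat_dim),
        )

    def forward(self, feats, labels=None):
        # Similarity of each feature to every sub-center; aggregate over
        # sub-centers with a max (one plausible choice of aggregation).
        sims = torch.einsum("bd,csd->bcs", feats, self.subcenters)  # (B, C, S)
        logits = sims.max(dim=2).values                             # (B, C)
        if labels is None:
            return logits, None
        # Euclidean-distance penalty to the nearest sub-center of the true
        # class, encouraging intra-class compactness.
        own = self.subcenters[labels]                        # (B, S, D)
        d2 = ((feats.unsqueeze(1) - own) ** 2).sum(dim=-1)   # (B, S)
        compact_loss = d2.min(dim=1).values.mean()
        return logits, compact_loss
```

In training, the total objective would combine a standard classification loss with the compactness penalty, e.g. `loss = F.cross_entropy(logits, labels) + lam * compact_loss`, where the weight `lam` is an assumed hyperparameter.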