Most classification models can be viewed as a template-matching process. However, ignoring intra-class uncertainty/variability, especially on datasets with unbalanced classes, can lead to classification errors. To address this issue, we propose a loss function that models intra-class uncertainty with a Gaussian distribution. Specifically, in our framework, the features of each class extracted by a deep network are modeled by an independent Gaussian distribution. The distribution parameters are learned jointly with the other network parameters via a likelihood regularization. The Gaussian means play a role similar to the center anchors in existing methods, while the variances describe the uncertainty of each class. In addition, analogous to the inter-class margin in traditional loss functions, we introduce a margin on the intra-class uncertainty to make each cluster more compact and to reduce the imbalance among the feature distributions of different categories. Experiments on MNIST, CIFAR, ImageNet, and long-tailed CIFAR show that the proposed approach improves classification performance by learning a better class representation.
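As a rough illustration of this idea, the sketch below models each class with a learned diagonal Gaussian, uses the per-class log-likelihoods as logits, adds a likelihood regularization on the true class, and penalizes variances that exceed a margin. All names (`GaussianUncertaintyLoss`, `margin`, `reg_weight`) and the exact form of the margin term are assumptions for illustration, not the paper's formulation.

```python
# A minimal PyTorch sketch of a Gaussian intra-class uncertainty loss.
# Hypothetical implementation under stated assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianUncertaintyLoss(nn.Module):
    """Score each class by Gaussian log-likelihood; regularize the likelihood
    of the true class and apply a margin to the intra-class variance."""

    def __init__(self, num_classes, feat_dim, margin=0.1, reg_weight=0.1):
        super().__init__()
        # Learned per-class Gaussian parameters (diagonal covariance).
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.log_vars = nn.Parameter(torch.zeros(num_classes, feat_dim))
        self.margin = margin
        self.reg_weight = reg_weight

    def forward(self, feats, labels):
        # feats: (B, D) deep features, labels: (B,) class indices.
        var = self.log_vars.exp()                         # (C, D)
        diff = feats.unsqueeze(1) - self.means            # (B, C, D)
        # Per-class Gaussian log-likelihood, up to an additive constant.
        log_lik = -0.5 * ((diff ** 2) / var + self.log_vars).sum(-1)  # (B, C)
        # Classification term: log-likelihoods serve as logits.
        ce = F.cross_entropy(log_lik, labels)
        # Likelihood regularization: maximize likelihood under the true class.
        reg = -log_lik.gather(1, labels.unsqueeze(1)).mean()
        # Margin on intra-class uncertainty: penalize variances above margin,
        # pushing each class cluster to be more compact (assumed form).
        margin_pen = F.relu(var - self.margin).mean()
        return ce + self.reg_weight * (reg + margin_pen)
```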