Label hierarchies are often available a priori as part of biological taxonomies or language datasets such as WordNet. Several works exploit these hierarchies to learn hierarchy-aware features, improving classifiers so that they make semantically meaningful mistakes while maintaining or reducing the overall error. In this paper, we propose a novel approach for learning Hierarchy Aware Features (HAF) that leverages classifiers at each level of the hierarchy, constrained to generate predictions consistent with the label hierarchy. The classifiers are trained by minimizing a Jensen-Shannon Divergence with target soft labels obtained from the fine-grained classifiers. Additionally, we employ a simple geometric loss that constrains the feature-space geometry to capture the semantic structure of the label space. HAF is a training-time approach that reduces the severity of mistakes while maintaining top-1 error, thereby addressing the shortcoming of the cross-entropy loss, which treats all mistakes as equal. We evaluate HAF on three hierarchical datasets and achieve state-of-the-art results on the iNaturalist-19 and CIFAR-100 datasets. The source code is available at https://github.com/07Agarg/HAF
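To make the consistency objective concrete, below is a minimal sketch (not the authors' released implementation; see the repository above for that) of a Jensen-Shannon Divergence loss between a coarse-level classifier's prediction and soft targets aggregated from a fine-grained classifier. The function name `jsd_consistency_loss` and the fine-to-coarse indicator matrix `M` are hypothetical helpers introduced for illustration, assuming soft targets are obtained by marginalizing fine-grained probabilities over the hierarchy.

```python
# A minimal sketch of a JSD consistency loss between hierarchy levels.
# Assumption: coarse soft targets come from summing fine-grained class
# probabilities under each coarse ancestor via the 0/1 matrix `M`.
import torch
import torch.nn.functional as F

def jsd_consistency_loss(coarse_logits, fine_logits, M, eps=1e-8):
    """coarse_logits: (B, C_coarse) logits from the coarse-level head
    fine_logits:   (B, C_fine)   logits from the fine-grained head
    M:             (C_fine, C_coarse) indicator matrix mapping each
                   fine class to its coarse ancestor (hypothetical)
    """
    p = F.softmax(coarse_logits, dim=-1)       # coarse prediction
    q = F.softmax(fine_logits, dim=-1) @ M     # soft target: fine probs
                                               # marginalized per coarse class
    m = 0.5 * (p + q)                          # mixture distribution
    # JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(-1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(-1)
    return (0.5 * (kl_pm + kl_qm)).mean()
```

Because the JSD is symmetric and bounded, it is a gentler consistency penalty than a one-sided KL divergence; in this sketch it would be added to the standard cross-entropy terms at each hierarchy level during training.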