We investigate the problem of reducing mistake severity for fine-grained classification. Fine-grained classification can be challenging, mainly because accurate annotation requires knowledge or domain expertise. Humans, however, are particularly adept at coarse classification, which demands comparatively little expertise. To this end, we present a novel post-hoc correction approach called Hierarchical Ensembles (HiE) that uses the label hierarchy to improve fine-grained classification at test time by leveraging coarse-grained predictions. Requiring only the parents of leaf nodes, our method significantly reduces average mistake severity while improving top-1 accuracy on the iNaturalist-19 and tieredImageNet-H datasets, achieving a new state-of-the-art on both benchmarks. We also investigate the efficacy of our approach in the semi-supervised setting. Our approach brings notable gains in top-1 accuracy while significantly decreasing the severity of mistakes as training data for the fine-grained classes decreases. The simplicity and post-hoc nature of HiE make it practical to apply to any off-the-shelf trained model to further improve its predictions.
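The abstract does not spell out how the coarse and fine predictions are combined, but a minimal sketch of one plausible post-hoc correction is shown below: re-weight each fine-grained class probability by the predicted probability of its parent (coarse) class and re-normalize. The function name `hierarchical_ensemble` and the `parent_of` mapping are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hierarchical_ensemble(fine_probs: np.ndarray,
                          coarse_probs: np.ndarray,
                          parent_of: np.ndarray) -> np.ndarray:
    """Hypothetical post-hoc combination of fine- and coarse-grained predictions.

    fine_probs:   (N, num_fine)   softmax outputs over fine-grained (leaf) classes
    coarse_probs: (N, num_coarse) softmax outputs over coarse (parent) classes
    parent_of:    (num_fine,)     index of each fine class's parent coarse class
    """
    # For each fine class, look up the probability assigned to its parent node.
    parent_probs = coarse_probs[:, parent_of]            # (N, num_fine)
    # Re-weight fine probabilities by their parent's coarse probability.
    combined = fine_probs * parent_probs
    # Re-normalize so each row is a valid distribution again.
    combined /= combined.sum(axis=1, keepdims=True)
    return combined

# Toy usage with random probabilities standing in for two trained models.
rng = np.random.default_rng(0)
num_fine, num_coarse, N = 10, 3, 4
parent_of = rng.integers(0, num_coarse, size=num_fine)
fine = rng.random((N, num_fine));     fine /= fine.sum(1, keepdims=True)
coarse = rng.random((N, num_coarse)); coarse /= coarse.sum(1, keepdims=True)
preds = hierarchical_ensemble(fine, coarse, parent_of).argmax(axis=1)
```

Because such a correction only touches the output probabilities, it can be applied to any off-the-shelf trained model at test time, consistent with the post-hoc nature of HiE described above.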