Few-shot learning (FSL) is a challenging learning problem in which only a few samples are available for each class. Interpreting a model's decisions is especially important in few-shot classification, since the risk of error is higher than in conventional classification. However, most previous FSL methods are black-box models. In this paper, we propose an inherently interpretable FSL model based on human-friendly attributes. We further propose an online attribute-selection mechanism that effectively filters out irrelevant attributes in each episode; this mechanism improves accuracy and aids interpretability by reducing the number of attributes participating in each episode. We also propose a mechanism that automatically detects episodes where the pool of human-friendly attributes is inadequate, and compensates by engaging learned unknown attributes. We demonstrate that the proposed method achieves results on par with black-box few-shot learning models on four widely used datasets.
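To make the episode-wise attribute selection concrete, below is a minimal illustrative sketch of one possible way to score and select attributes per episode, not the paper's actual mechanism. All names (`select_attributes`, `top_k`, the variance-based relevance score) are hypothetical assumptions for illustration only; the sketch assumes attribute scores have already been predicted for support and query images.

```python
import torch
import torch.nn.functional as F

def select_attributes(support_attr_logits, support_labels, top_k=10):
    """Hypothetical per-episode attribute selection: score each attribute
    by how well it separates the support classes, then keep the top-k.

    support_attr_logits: (n_support, n_attributes) predicted attribute scores
    support_labels:      (n_support,) class indices in [0, n_way)
    """
    n_way = int(support_labels.max()) + 1
    # Class-conditional mean of each attribute score.
    class_means = torch.stack([
        support_attr_logits[support_labels == c].mean(dim=0)
        for c in range(n_way)
    ])                                    # (n_way, n_attributes)
    # Treat an attribute as relevant if its mean differs across classes;
    # the variance of class means is one simple relevance proxy.
    relevance = class_means.var(dim=0)    # (n_attributes,)
    keep = relevance.topk(top_k).indices  # indices of selected attributes
    return keep, class_means[:, keep]     # selected attrs + class prototypes

def classify_query(query_attr_logits, keep, prototypes):
    """Nearest-prototype classification in the selected attribute space."""
    q = query_attr_logits[:, keep]        # (n_query, top_k)
    dists = torch.cdist(q, prototypes)    # (n_query, n_way)
    return F.softmax(-dists, dim=-1)      # class probabilities
```

In this sketch, restricting classification to the selected attribute subspace mirrors the abstract's claim that fewer participating attributes both improves accuracy and keeps each episode's decision explainable in terms of a handful of human-friendly attributes.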