Few-shot learning (FSL) is a challenging problem in which only a few labeled samples are available per class. Interpreting a model's decisions matters even more in few-shot classification than in conventional classification, because the risk of error is higher. Yet most previous FSL methods are black-box models. In this paper, we propose an inherently interpretable FSL model based on human-friendly attributes. We further propose an online attribute-selection mechanism that effectively filters out attributes irrelevant to the current episode; this selection improves accuracy and aids interpretability by reducing the number of attributes involved in each episode. We demonstrate that the proposed method achieves results on par with black-box few-shot learning models on four widely used datasets. To further close the performance gap with black-box models, we propose a mechanism that trades a measure of interpretability for accuracy: it automatically detects episodes where the provided human-friendly attributes are inadequate and compensates by engaging learned unknown attributes.
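The online attribute selection can be pictured as a learned per-episode gate over attribute scores. The following is a minimal PyTorch sketch under assumed names and architecture (EpisodicAttributeSelector, classify_query, and the sigmoid-gated scorer are illustrative, not the paper's exact design): the gate scores each attribute's relevance from the episode's support set and soft-masks irrelevant attributes before nearest-prototype classification.

```python
import torch
import torch.nn as nn


class EpisodicAttributeSelector(nn.Module):
    """Sketch of an online, per-episode attribute-selection gate.

    Given the support set's predicted attribute values, it scores each
    attribute's relevance to the episode and soft-masks the irrelevant
    ones. Names and layer sizes here are illustrative assumptions.
    """

    def __init__(self, num_attributes: int, hidden_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(num_attributes, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_attributes),
        )

    def forward(self, support_attrs: torch.Tensor) -> torch.Tensor:
        # support_attrs: (n_way, n_shot, num_attributes), predicted
        # attribute values for the episode's support examples.
        episode_summary = support_attrs.mean(dim=(0, 1))   # (num_attributes,)
        relevance = torch.sigmoid(self.scorer(episode_summary))
        return relevance  # values in [0, 1]; near zero gates an attribute off


def classify_query(query_attrs: torch.Tensor,
                   support_attrs: torch.Tensor,
                   selector: EpisodicAttributeSelector) -> int:
    """Nearest-prototype classification in the gated attribute space."""
    gate = selector(support_attrs)                       # (num_attributes,)
    prototypes = support_attrs.mean(dim=1)               # (n_way, num_attributes)
    # Squared distance to each class prototype, weighted by the gate.
    dists = ((prototypes - query_attrs) * gate).pow(2).sum(dim=-1)
    return dists.argmin().item()
```

Because each gate value is attached to a single named attribute, the surviving attributes remain directly readable as an explanation of the episode's decision, which is the interpretability benefit the selection mechanism targets.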