Deep Neural Networks use thousands of mostly incomprehensible features to identify a single class, a decision no human can follow. We propose an interpretable, sparse, and low-dimensional final decision layer in a deep neural network, with measurable aspects of interpretability, and demonstrate it on fine-grained image classification. We argue that a human can only understand the decision of a machine learning model if the features are interpretable and only very few of them are used for a single decision. To this end, the final layer has to be sparse and, to make interpreting the features feasible, low-dimensional. We call a model with a Sparse Low-Dimensional Decision an SLDD-Model. We show that an SLDD-Model is easier to interpret locally and globally than a dense high-dimensional decision layer while maintaining competitive accuracy. Additionally, we propose a loss function that improves a model's feature diversity and accuracy. Our more interpretable SLDD-Model uses only 5 out of just 50 features per class, while maintaining 97% to 100% of the accuracy on four common benchmark datasets compared to the baseline model with 2048 features.
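To make the described decision structure concrete, the following is a minimal sketch, not the authors' implementation: a linear layer maps a 2048-dimensional backbone embedding to 50 features, and a decision layer is pruned so each class's logit depends on at most 5 of them. The PyTorch framing, the class count of 200, and all names (`SLDDHead`, `prune_to_k`, `reduce`) are illustrative assumptions.

```python
# Illustrative sketch only; dimensions (2048 -> 50 features, 5 weights
# per class) follow the abstract, everything else is assumed.
import torch
import torch.nn as nn

class SLDDHead(nn.Module):
    """Decision layer over few features, with at most k nonzero weights
    per class (here 5 out of 50, per the abstract)."""
    def __init__(self, num_features: int = 50, num_classes: int = 200, k: int = 5):
        super().__init__()
        self.linear = nn.Linear(num_features, num_classes)
        self.k = k

    def prune_to_k(self) -> None:
        # Keep the k largest-magnitude weights per class; zero the rest.
        with torch.no_grad():
            w = self.linear.weight                       # (num_classes, num_features)
            keep = w.abs().topk(self.k, dim=1).indices   # top-k features per class
            w.mul_(torch.zeros_like(w).scatter_(1, keep, 1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

# Usage: reduce the dense backbone embedding to 50 features, then let
# each class decision use at most 5 of them.
reduce = nn.Linear(2048, 50)    # low-dimensional feature layer
head = SLDDHead()
head.prune_to_k()
logits = head(reduce(torch.randn(8, 2048)))              # shape: (8, 200)
```

Magnitude-based pruning is used here purely as a stand-in sparsification step; the point of the sketch is the shape of the decision layer, not the training procedure.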