Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, as an alternative to post-hoc explanations that only approximate a trained model. However, a large number of prototypes can be overwhelming. To reduce explanation size and improve interpretability, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in an interpretable decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at https://github.com/M-Nauta/ProtoTree.
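To make the routing mechanism concrete, the sketch below illustrates how a tree node could use a trainable prototypical part to softly route an image through the tree: the prototype's similarity to the closest patch in the CNN feature map acts as the probability of taking the right branch, and leaf class distributions are mixed accordingly. This is a minimal illustration written for this abstract, not the authors' implementation; the tensor shapes, the exp(-distance) similarity, and the names ProtoNode and Leaf are assumptions.

```python
# Minimal sketch of prototype-based soft routing (assumed design, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoNode(nn.Module):
    """Internal tree node holding one trainable prototypical part."""
    def __init__(self, channels, left, right):
        super().__init__()
        # Prototype: a 1x1 patch in the latent space of the CNN backbone (assumed shape).
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.left, self.right = left, right

    def forward(self, features):
        # features: (B, C, H, W) latent feature map of a batch of images.
        # Squared L2 distance between the prototype and every spatial patch.
        dists = ((features - self.prototype) ** 2).sum(dim=1)   # (B, H, W)
        min_dist = dists.flatten(1).min(dim=1).values            # (B,)
        p_right = torch.exp(-min_dist)                            # presence score in (0, 1]
        # Soft routing: mix the class distributions of both subtrees.
        return (1 - p_right).unsqueeze(1) * self.left(features) \
               + p_right.unsqueeze(1) * self.right(features)

class Leaf(nn.Module):
    """Leaf node with a learnable class distribution."""
    def __init__(self, num_classes):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_classes))

    def forward(self, features):
        batch = features.size(0)
        return F.softmax(self.logits, dim=0).expand(batch, -1)   # (B, num_classes)

# Usage: a tiny tree of depth 1 on top of dummy backbone features.
tree = ProtoNode(channels=256, left=Leaf(200), right=Leaf(200))
features = torch.randn(4, 256, 7, 7)   # e.g. backbone output for 4 images
probs = tree(features)                  # (4, 200) class probabilities
```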