In this manuscript, we show that any neural network with any activation function can be represented as a decision tree. The representation is an equivalence, not an approximation, and therefore preserves the accuracy of the neural network exactly. We believe that this work provides a better understanding of neural networks and paves the way to tackling their black-box nature. We share the equivalent trees of some neural networks and show that, besides providing interpretability, the tree representation can also achieve computational advantages for small networks. The analysis holds for both fully connected and convolutional networks, with or without skip connections and/or normalization layers.
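As a minimal sketch of the idea (not the authors' construction), consider a tiny ReLU network: each hidden unit's on/off state acts as a decision node, and each activation pattern defines a leaf that applies a purely linear rule. The weights below are arbitrary illustrative values.

```python
import numpy as np

# Hypothetical tiny network: 2 inputs, 2 ReLU hidden units, scalar output.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])   # hidden-layer weights
b1 = np.array([0.0, -1.0])     # hidden-layer biases
w2 = np.array([1.0, -2.0])     # output weights
b2 = 0.5                       # output bias

def net(x):
    """Standard forward pass through the ReLU network."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return w2 @ h + b2

def tree(x):
    """Same function, evaluated as branching decisions.

    Each `if` tests one hidden unit's activation sign; a path through
    all the tests selects one activation pattern (a leaf), where the
    output is a fixed linear function of the input.
    """
    z = W1 @ x + b1
    y = b2
    for i in range(len(z)):
        if z[i] > 0:            # decision node: is unit i active?
            y += w2[i] * z[i]   # leaf contribution is linear in x
        # inactive units contribute nothing on this branch
    return y

x = np.array([0.3, -1.2])
assert np.isclose(net(x), tree(x))  # the two evaluations agree exactly
```

With two hidden units there are at most four activation patterns, so the equivalent tree has at most four leaves; this is the sense in which the tree is an exact re-expression of the network rather than an approximation.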