The most difficult task in machine learning is to interpret trained shallow neural networks. Deep neural networks (DNNs) provide impressive results on a large number of tasks, but it is generally still unclear how a trained deep neural network arrives at its decisions. Attributing importance to input features is the most popular interpretation technique for both shallow and deep neural networks. In this paper, we develop an algorithm that extends the idea of Garson's algorithm to explain a Deep Belief Network based Auto-encoder (DBNA). The algorithm determines the contribution of each input feature in the DBN and can be applied to any neural network with many hidden layers. The effectiveness of this method is tested on both classification and regression datasets taken from the literature. The important features identified by this method are compared against those obtained by the Wald chi-square (χ²) test. For 2 out of 4 classification datasets and 2 out of 5 regression datasets, our proposed methodology identified better-quality features, leading to statistically more significant results vis-à-vis Wald χ².
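To make the approach concrete, the multi-layer generalization of Garson's weight-based feature importance can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the standard Garson normalization (absolute weights normalized per receiving unit) chained across layers by matrix products; the function name `garson_importance` and the list-of-matrices interface are illustrative choices.

```python
import numpy as np

def garson_importance(weights):
    """Generalized Garson feature importance for a network with many hidden layers.

    `weights` is a list of weight matrices, each shaped (n_in, n_out),
    ordered from the input layer onward. Biases are ignored, as in the
    classic single-hidden-layer Garson algorithm. A hedged sketch only;
    the paper's DBNA-specific variant may normalize differently.
    """
    # Absolute input-to-first-hidden weights, normalized per hidden unit
    # (column-wise), so each column sums to 1.
    contrib = np.abs(weights[0])
    contrib = contrib / contrib.sum(axis=0, keepdims=True)

    # Chain the same normalized absolute-weight step through deeper layers.
    for W in weights[1:]:
        Wn = np.abs(W)
        Wn = Wn / Wn.sum(axis=0, keepdims=True)
        contrib = contrib @ Wn

    # Sum each input's contribution over the output units and normalize
    # so the importances sum to 1.
    importance = contrib.sum(axis=1)
    return importance / importance.sum()
```

The result is a non-negative vector over input features that sums to 1, which can then be ranked and compared against the features selected by a Wald χ² test.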