Many recent studies focus on developing mechanisms to explain the black-box behaviors of neural networks (NNs). However, little work has been done to extract the potential hidden semantics (mathematical representations) of a neural network. A succinct and explicit mathematical representation of a NN model could improve the understanding and interpretation of its behaviors. To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of a NN. SRNet creates a neural network Cartesian genetic programming (NNCGP) model to represent the hidden semantics of a single layer in a NN. It then leverages a multi-chromosome NNCGP to represent the hidden semantics of all layers of the NN. The method uses a (1+$\lambda$) evolution strategy (called MNNCGP-ES) to extract the final mathematical expressions of all layers in the NN. Experiments on 12 symbolic regression benchmarks and 5 classification benchmarks show that SRNet can not only reveal the complex relationships between the layers of a NN but also extract the mathematical representation of the whole NN. Compared with LIME and MAPLE, SRNet achieves higher interpolation accuracy and tends to approximate the real model on practical datasets.
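The abstract names a (1+$\lambda$) evolution strategy over CGP-style genomes but gives no algorithmic detail. The following is a minimal sketch of the generic (1+$\lambda$) scheme, not the paper's MNNCGP-ES: the genome is a flat list of integer genes standing in for a (multi-chromosome) CGP encoding, and `evaluate` is a placeholder fitness that, in SRNet, would measure how well the decoded expressions reproduce each layer's outputs. All names and parameters here are illustrative assumptions.

```python
import random

GENOME_LEN = 40     # assumed genome length (illustrative)
GENE_MAX = 9        # assumed gene value range (illustrative)
LAMBDA = 4          # number of offspring per generation
MUTATION_RATE = 0.1 # per-gene mutation probability

def evaluate(genome):
    # Placeholder fitness (lower is better): distance to an arbitrary
    # target genome. SRNet would instead decode the genome into layer
    # expressions and score their fit to the NN's hidden semantics.
    target = [i % (GENE_MAX + 1) for i in range(GENOME_LEN)]
    return sum(abs(g - t) for g, t in zip(genome, target))

def mutate(genome):
    # Point mutation: each gene is resampled with small probability.
    return [random.randint(0, GENE_MAX) if random.random() < MUTATION_RATE else g
            for g in genome]

def one_plus_lambda_es(generations=200):
    # Classic (1 + lambda) loop: one parent, lambda mutants per step.
    parent = [random.randint(0, GENE_MAX) for _ in range(GENOME_LEN)]
    parent_fit = evaluate(parent)
    for _ in range(generations):
        for child in (mutate(parent) for _ in range(LAMBDA)):
            fit = evaluate(child)
            # Elitist selection; ties favor the child so the search can
            # drift across fitness plateaus, as is common in CGP.
            if fit <= parent_fit:
                parent, parent_fit = child, fit
    return parent, parent_fit

if __name__ == "__main__":
    best, fit = one_plus_lambda_es()
    print("best fitness:", fit)
```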