Deep neural networks (DNNs) have shown exceptional performance on a wide range of tasks and have become the go-to method for problems requiring high-level predictive power. Although there has been extensive research on how DNNs arrive at their decisions, these inherently uninterpretable networks remain, to this day, largely opaque "black boxes". In recent years, the field has seen a push towards interpretable neural networks, such as the visually interpretable Neural Additive Models (NAMs). We take a further step in the direction of intelligibility, beyond the mere visualization of feature effects, and propose Structural Neural Additive Models (SNAMs): a modeling framework that combines classical, clearly interpretable statistical methods with the predictive power of neural networks. Our experiments validate the predictive performance of SNAMs: the proposed framework performs comparably to state-of-the-art fully connected DNNs, and we show that SNAMs can even outperform NAMs while remaining inherently more interpretable.
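To make the additive idea concrete, below is a minimal conceptual sketch, not the authors' implementation: it assumes each feature contributes through its own effect function, where some features use a structured, directly interpretable component (here a hypothetical polynomial basis with readable coefficients) and others a small NAM-style subnetwork, and the prediction is simply the sum of these per-feature contributions.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small subnetwork modelling the effect of a single feature (NAM-style)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):          # x: (batch, 1)
        return self.net(x)

class PolynomialEffect(nn.Module):
    """Structured effect: linear combination of polynomial basis terms
    (illustrative stand-in for a classical, interpretable component)."""
    def __init__(self, degree=3):
        super().__init__()
        self.degree = degree
        self.coef = nn.Linear(degree, 1, bias=False)  # readable coefficients

    def forward(self, x):          # x: (batch, 1)
        basis = torch.cat([x ** d for d in range(1, self.degree + 1)], dim=1)
        return self.coef(basis)

class AdditiveModel(nn.Module):
    """Sum of per-feature effects plus a bias; predictions decompose by feature."""
    def __init__(self, effects):
        super().__init__()
        self.effects = nn.ModuleList(effects)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):          # x: (batch, n_features)
        contributions = [f(x[:, j:j + 1]) for j, f in enumerate(self.effects)]
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0).squeeze(-1)

# Hypothetical usage: feature 0 gets a structured polynomial effect,
# feature 1 a flexible neural effect.
model = AdditiveModel([PolynomialEffect(degree=3), FeatureNet(hidden=32)])
y_hat = model(torch.randn(8, 2))   # (8,) predictions that decompose additively
```

Because the output is a sum of univariate effects, each feature's contribution can be inspected or plotted in isolation; in the structured components the fitted coefficients themselves carry the interpretation.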