Deep neural networks (DNNs) have proven highly effective across a wide variety of tasks, making them the go-to method for problems requiring strong predictive power. Despite this success, the inner workings of DNNs are often opaque, making them difficult to interpret or understand. This lack of interpretability has led to increased research on inherently interpretable neural networks in recent years. Models such as Neural Additive Models (NAMs) achieve visual interpretability by combining classical statistical methods with DNNs. However, these approaches concentrate only on predicting the mean of the response, leaving out other properties of the response distribution of the underlying data. We propose Neural Additive Models for Location Scale and Shape (NAMLSS), a modelling framework that combines the predictive power of classical deep learning models with the inherent advantages of distributional regression while maintaining the interpretability of additive models.
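For concreteness, the following is a minimal sketch of the model family the abstract describes, assuming the GAMLSS-style parameterization that distributional regression is built on; the symbols $\theta^{(k)}$, $h^{(k)}$, $\beta^{(k)}$, and $f_j^{(k)}$ are our notation, not drawn verbatim from the text. Each parameter of the response distribution receives its own additive predictor:

\[
\theta^{(k)} = h^{(k)}\!\left(\beta^{(k)} + \sum_{j=1}^{J} f_j^{(k)}(x_j)\right), \qquad k = 1, \dots, K,
\]

where $h^{(k)}$ is a parameter-specific link function (e.g.\ an exponential or softplus for a positive scale parameter), $\beta^{(k)}$ is an intercept, and each $f_j^{(k)}$ is a per-feature subnetwork, so every feature's contribution to every distributional parameter can be plotted and inspected, just as in a NAM.

A hedged PyTorch sketch of this idea for a Gaussian response follows; the names `NAMLSSSketch` and `FeatureNet` are hypothetical illustrations under the assumptions above, not the authors' implementation:

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """One small MLP per input feature (the NAM building block)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class NAMLSSSketch(nn.Module):
    """Hypothetical sketch: one additive network per distributional
    parameter (here: mean and scale of a Gaussian)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.mu_nets = nn.ModuleList([FeatureNet(hidden) for _ in range(n_features)])
        self.sigma_nets = nn.ModuleList([FeatureNet(hidden) for _ in range(n_features)])
        self.mu_bias = nn.Parameter(torch.zeros(1))
        self.sigma_bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        cols = x.split(1, dim=1)  # one (batch, 1) column per feature
        mu = self.mu_bias + sum(f(c) for f, c in zip(self.mu_nets, cols))
        # softplus link keeps the scale parameter strictly positive
        sigma = nn.functional.softplus(
            self.sigma_bias + sum(f(c) for f, c in zip(self.sigma_nets, cols)))
        return mu.squeeze(-1), sigma.squeeze(-1)

# Usage: training would minimize the Gaussian negative log-likelihood, e.g.
#   model = NAMLSSSketch(n_features=5)
#   mu, sigma = model(torch.randn(8, 5))
#   loss = torch.distributions.Normal(mu, sigma).log_prob(y).mean().neg()
```

Because each `FeatureNet` sees exactly one feature, its learned shape function can be visualized directly, which is what preserves the additive-model interpretability the abstract refers to.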