Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high-stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs), which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees. To demonstrate this, we show how the composability of NAMs enables multitask learning on synthetic data and on the COMPAS recidivism data, and demonstrate that the differentiability of NAMs allows them to train more complex interpretable models for COVID-19.
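The architecture described above — a sum of small neural networks, each receiving only one input feature — can be sketched as follows. This is a minimal forward-pass illustration using NumPy only; the layer sizes, ReLU activation, and initialization are illustrative assumptions, not the exact configuration from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureNet:
    """A tiny MLP mapping a single scalar feature to a scalar contribution."""
    def __init__(self, hidden=16):
        self.w1 = rng.normal(scale=0.5, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, x):
        # x: (n, 1) column holding one feature
        h = np.maximum(0.0, x @ self.w1 + self.b1)  # ReLU hidden layer
        return h @ self.w2 + self.b2                # (n, 1) contribution

class NAM:
    """Sum of per-feature networks plus a global bias (additive structure)."""
    def __init__(self, n_features, hidden=16):
        self.nets = [FeatureNet(hidden) for _ in range(n_features)]
        self.bias = 0.0

    def __call__(self, X):
        # Each subnetwork sees only its own feature column, so the model
        # stays intelligible: plotting net_i over feature i reveals the
        # exact shape function the model learned for that feature.
        contributions = [net(X[:, [i]]) for i, net in enumerate(self.nets)]
        return np.sum(contributions, axis=0).ravel() + self.bias

X = rng.normal(size=(8, 3))  # 8 examples, 3 features
model = NAM(n_features=3)
y = model(X)
print(y.shape)  # (8,)
```

Because the output is a sum of per-feature terms, perturbing one feature changes only that feature's contribution — this additivity is what makes the learned shape functions directly inspectable, and (as the abstract notes) the whole model remains differentiable end to end for joint training.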