Generalized Additive Models (GAMs) have quickly become the leading choice for fully-interpretable machine learning. However, unlike uninterpretable methods such as DNNs, they lack expressive power and easy scalability, and are hence not a feasible alternative for real-world tasks. We present a new class of GAMs that use tensor rank decompositions of polynomials to learn powerful, {\em fully-interpretable} models. Our approach, titled Scalable Polynomial Additive Models (SPAM), is effortlessly scalable and models {\em all} higher-order feature interactions without a combinatorial parameter explosion. SPAM outperforms all current interpretable approaches and matches DNN/XGBoost performance on a series of real-world benchmarks with up to hundreds of thousands of features. Human-subject evaluations show that SPAMs are demonstrably more interpretable in practice, making them an effortless replacement for DNNs when building interpretable, high-performance systems for large-scale machine learning. Source code is available at https://github.com/facebookresearch/nbm-spam.
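To make the core idea concrete, here is a minimal sketch (not the authors' implementation; all names and the restriction to degree 2 are illustrative) of how a rank decomposition avoids the combinatorial blow-up: instead of learning a dense $d \times d$ matrix of pairwise-interaction coefficients, the quadratic term is parameterized by $R$ rank-1 factors, so the parameter count grows as $O(Rd)$ rather than $O(d^2)$.

```python
import numpy as np

def spam_degree2(x, w, b, U, V):
    """Degree-2 polynomial additive model with a rank-R interaction term.

    x: (d,) input features
    w: (d,) linear coefficients, b: bias
    U, V: (R, d) factor matrices; the implicit full interaction
          matrix is U.T @ V, but it is never materialized.
    """
    linear = w @ x + b
    # sum_r (u_r . x)(v_r . x) == x^T (U^T V) x, computed in O(R*d)
    quadratic = np.sum((U @ x) * (V @ x))
    return linear + quadratic

# Illustrative usage with random parameters.
rng = np.random.default_rng(0)
d, R = 6, 3
x = rng.normal(size=d)
w, b = rng.normal(size=d), 0.1
U, V = rng.normal(size=(R, d)), rng.normal(size=(R, d))
y = spam_degree2(x, w, b, U, V)
```

The same factorization extends to higher-degree terms by decomposing the order-$k$ coefficient tensor into a sum of rank-1 tensors, keeping each per-feature contribution additive and hence inspectable.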