Today, it is more important than ever for users to trust the models they use. As machine learning models fall under increased regulatory scrutiny and see more applications in high-stakes settings, explaining these models becomes critical. Piecewise Linear Neural Networks (PLNNs) with the ReLU activation function have quickly become popular due to their many appealing properties; however, they still present challenges in robustness and interpretability. To this end, we introduce novel methodology for simplifying Piecewise Linear Neural Networks and increasing their interpretability in classification tasks. Our methods include using a trained deep network to produce a well-performing single-hidden-layer network without further stochastic training, as well as an algorithm that reduces flat networks to a smaller, more interpretable size with minimal loss in performance. Using these methods, we conduct preliminary studies of model performance, as well as a case study on Wells Fargo's Home Lending dataset, together with visual model interpretation.