Deep learning has become popular because of its potential to achieve high accuracy in prediction tasks. However, accuracy is not always the only goal of statistical modelling, especially for models developed as part of scientific research. Rather, many scientific models are developed to facilitate scientific discovery, by which we mean abstracting a human-understandable representation of the natural world. Unfortunately, the opacity of deep neural networks limits their role in scientific discovery, creating a new demand for models that are transparently interpretable. This article is a field guide to transparent model design. It provides a taxonomy of transparent model design concepts, a practical workflow for putting design concepts into practice, and a general template for reporting design choices. We hope this field guide will help researchers more effectively design transparently interpretable models, and thus enable them to use deep learning for scientific discovery.