Deep learning is increasingly being adopted in business and industry due to its ability to transform large quantities of data into high-performing models. These models, however, are generally regarded as black boxes, and despite their performance this opacity can hinder their adoption. In this context, the field of eXplainable AI (XAI) develops techniques that temper the impenetrable nature of such models and promote an understanding of their behavior. Here we present our contribution to XAI methods in the form of a framework, termed SpecXAI, based on a spectral characterization of the entire network. We show how this framework can be used not only to understand the network but also to manipulate it into a linear, interpretable symbolic representation.
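The abstract refers to a spectral characterization of the network without detailing it. As a purely illustrative assumption (the actual SpecXAI method is defined in the paper, not here), one common spectral view of a network is the singular value spectrum of each layer's weight matrix, which summarizes how the layer scales different input directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network weights (hypothetical stand-ins for a real model;
# the paper's own spectral characterization may differ from this sketch).
weights = {
    "layer1": rng.normal(size=(64, 32)),
    "layer2": rng.normal(size=(32, 10)),
}

def spectral_summary(W, k=3):
    """Return the top-k singular values of a weight matrix.

    Singular values are returned in non-increasing order; the leading
    ones indicate the dominant linear directions the layer amplifies.
    """
    s = np.linalg.svd(W, compute_uv=False)
    return s[:k]

for name, W in weights.items():
    print(name, spectral_summary(W))
```

A dominant gap between the leading singular values and the rest suggests the layer acts close to a low-rank linear map, which is one route toward the kind of linear symbolic approximation the abstract mentions.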