Traditional tabular classifiers provide explainable decision-making with interpretable features (concepts). However, applying their explainability to vision tasks has been limited by the pixel representation of images. In this paper, we design Img2Tabs, which classify images by concepts to harness the explainability of tabular classifiers. Img2Tabs encode image pixels into tabular features via StyleGAN inversion. Since not all of the resulting features are class-relevant or interpretable due to their generative nature, we expect Img2Tab classifiers to discover class-relevant concepts automatically from the StyleGAN features. To this end, we propose a novel method using the Wasserstein-1 metric to quantify class-relevancy and interpretability simultaneously. Using this method, we investigate whether the important features extracted by tabular classifiers are class-relevant concepts. Consequently, we determine the most effective classifier for Img2Tabs in terms of discovering class-relevant concepts automatically from StyleGAN features. In evaluations, we demonstrate concept-based explanations through importance scores and visualization. Img2Tab achieves top-1 accuracy on par with CNN classifiers and deep feature learning baselines. Additionally, we show that users can easily debug Img2Tab classifiers at the concept level to ensure unbiased and fair decision-making without sacrificing accuracy.
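To make the Wasserstein-1 scoring idea concrete, the sketch below computes a per-feature Wasserstein-1 distance between class-conditional feature distributions, so that features whose distributions differ across classes score as more class-relevant. This is only an illustrative sketch, not the paper's exact procedure: the synthetic data, the per-feature scoring loop, and all variable names are assumptions introduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Synthetic stand-ins for StyleGAN-derived tabular features (illustrative only,
# not the paper's data): rows are samples, columns are latent features.
feats_class0 = rng.normal(0.0, 1.0, size=(500, 4))
feats_class1 = rng.normal(0.0, 1.0, size=(500, 4))
feats_class1[:, 2] += 2.0  # make feature 2 class-relevant by shifting its mean

# Score each feature by the Wasserstein-1 distance between its empirical
# class-conditional distributions: a larger score suggests more class-relevance.
scores = np.array([
    wasserstein_distance(feats_class0[:, j], feats_class1[:, j])
    for j in range(feats_class0.shape[1])
])
print(int(scores.argmax()))  # the shifted feature should score highest
```

A ranking like this could then be compared against the feature importances reported by a tabular classifier to check whether its important features coincide with class-relevant ones.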