We propose a graph-oriented, attention-based explainability method for tabular data. Tasks involving tabular data have mostly been solved with traditional tree-based machine learning models, which require substantial feature selection and engineering. With that in mind, we consider a transformer architecture for tabular data, which is amenable to explainability, and present a novel way to leverage the self-attention mechanism to provide explanations by taking into account the attention matrices of all layers as a whole. The matrices are mapped to a graph structure in which groups of features correspond to nodes and attention values to arcs. By finding the maximum-probability paths in this graph, we identify the groups of features that contribute most to explaining the model's predictions. To assess the quality of multi-layer attention-based explanations, we compare them with popular attention-, gradient-, and perturbation-based explainability methods.
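The core computation described above (treating the stacked attention matrices as a layered graph and extracting the maximum-probability path) can be sketched as a Viterbi-style dynamic program in log space. This is a minimal illustration under assumed conventions, not the paper's implementation: it assumes each layer's attention has already been averaged over heads into a row-stochastic matrix, and that `attn[l][i, j]` is the attention from feature-group node `i` in layer `l` to node `j` in layer `l+1`.

```python
import numpy as np

def max_probability_path(attn, eps=1e-12):
    """Maximum-probability path through a stack of attention matrices.

    attn : list of L row-stochastic (n x n) arrays; attn[l][i, j] is the
           attention weight from node i in layer l to node j in layer l+1.
           (Hypothetical interface, assumed for illustration.)
    Returns (best_prob, path), where path has one node index per layer.
    """
    n = attn[0].shape[0]
    log_p = np.zeros(n)   # best log-probability of reaching each input node
    back = []             # back-pointers, one array per layer
    for A in attn:
        # scores[i, j]: best log-prob of a path ending with arc i -> j
        scores = log_p[:, None] + np.log(A + eps)
        back.append(scores.argmax(axis=0))
        log_p = scores.max(axis=0)
    # backtrack from the best terminal node to recover the path
    j = int(log_p.argmax())
    path = [j]
    for bp in reversed(back):
        j = int(bp[j])
        path.append(j)
    path.reverse()
    return float(np.exp(log_p.max())), path
```

Working in log space avoids numerical underflow when multiplying many attention probabilities, and the back-pointers make the path recoverable in a single backward pass.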