Explainability techniques for Graph Neural Networks (GNNs) still have a long way to go compared to the explanations available for both neural and decision tree-based models trained on tabular data. Using a task that straddles both graphs and tabular data, namely Entity Matching, we comment on key aspects of explainability that are missing from GNN model explanations.