Graph Neural Networks (GNNs) extend deep learning techniques to graph data and have achieved significant progress in graph analysis tasks (e.g., node classification) in recent years. However, like other deep neural networks such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), GNNs behave as black boxes whose inner workings are hidden from model developers and users, which makes it difficult to diagnose possible errors in GNNs. Although many visual analytics studies have targeted CNNs and RNNs, little research has addressed the challenges specific to GNNs. This paper fills that research gap with an interactive visual analysis tool, GNNVis, which assists model developers and users in understanding and analyzing GNNs. Specifically, the Parallel Sets View and the Projection View enable users to quickly identify and validate error patterns in the set of wrong predictions, while the Graph View and the Feature Matrix View offer a detailed analysis of individual nodes that helps users form hypotheses about those error patterns. Since GNNs jointly model the graph structure and the node features, we reveal the relative influence of these two types of information by comparing the predictions of three models: the GNN, a Multi-Layer Perceptron (MLP), and a GNN Without Using Features (GNNWUF). Two case studies and interviews with domain experts demonstrate the effectiveness of GNNVis in facilitating the understanding of GNN models and their errors.
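To make the three-model comparison concrete, the sketch below (not the authors' implementation) contrasts a GCN-style GNN that uses both structure and features, an MLP that uses features only, and a structure-only GNNWUF. Approximating GNNWUF by feeding the GCN identity (one-hot) node features is an assumption for illustration; the model names, toy data, and PyTorch framing are likewise illustrative rather than taken from the paper.

```python
# Minimal sketch of the GNN / MLP / GNNWUF comparison, assuming a
# two-layer GCN (Kipf & Welling) and identity features for GNNWUF.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class GCN(nn.Module):
    """Two-layer GCN: jointly uses graph structure and node features."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim)
        self.w1 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w0(x))   # aggregate neighbors, then transform
        return a_norm @ self.w1(h)

class MLP(nn.Module):
    """Feature-only baseline: same widths, no neighborhood aggregation."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, n_classes))

    def forward(self, x):
        return self.net(x)

# Toy data: n nodes with random features, edges, and class count.
n, d, c = 8, 5, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()       # make the graph undirected
a_norm = normalize_adj(adj)
eye = torch.eye(n)                        # one-hot features for GNNWUF

gnn, mlp, gnnwuf = GCN(d, 16, c), MLP(d, 16, c), GCN(n, 16, c)
preds = {
    "GNN":    gnn(x, a_norm).argmax(1),      # structure + features
    "MLP":    mlp(x).argmax(1),              # features only
    "GNNWUF": gnnwuf(eye, a_norm).argmax(1), # structure only
}
# In practice each model is trained on the node-classification task first;
# this untrained forward pass only illustrates the comparison interface.
# A GNNVis-style analysis would then compare the three predictions per node
# to attribute each error to the graph structure, the features, or both.
for name, p in preds.items():
    print(name, p.tolist())
```

For example, a node the MLP classifies correctly but the GNN and GNNWUF both misclassify suggests the error stems from the graph structure (e.g., many neighbors from other classes) rather than from the node's own features.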