Scalability is an important consideration for deep graph neural networks. Inspired by the conventional pooling layers in CNNs, many recent graph learning approaches have adopted pooling strategies to reduce the size of graphs, thereby improving scalability and efficiency. However, these pooling-based methods are mainly tailored to a single graph-level task and pay more attention to local information, limiting their performance in multi-task settings which often require task-specific global information. In this paper, departing from these pooling-based efforts, we design a new approach called DOTIN (\underline{D}r\underline{o}pping \underline{T}ask-\underline{I}rrelevant \underline{N}odes) to reduce the size of graphs. Specifically, by introducing $K$ learnable virtual nodes to represent the graph embeddings targeted to $K$ different graph-level tasks, respectively, up to 90\% of the raw nodes with low attentiveness, as measured by an attention model -- a transformer in this paper -- can be adaptively dropped without notable performance degradation. While achieving almost the same accuracy, our method speeds up GAT by about 50\% on graph-level tasks including graph classification and graph edit distance (GED), with about 60\% less memory, on the D\&D dataset. Code will be made publicly available at https://github.com/Sherrylone/DOTIN.
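To make the core idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes: $K$ learnable virtual (task) nodes attend over the raw nodes, and only the nodes receiving the highest attention are kept. All names, the multi-head attention module, and the scoring rule (max attention from any task node) are illustrative assumptions, not the authors' implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class VirtualNodeDropper(nn.Module):
    """Sketch: K learnable virtual nodes attend over raw nodes;
    raw nodes with low attentiveness are dropped (hypothetical code)."""

    def __init__(self, dim: int, num_tasks: int, keep_ratio: float = 0.1):
        super().__init__()
        # One learnable virtual node per graph-level task.
        self.task_nodes = nn.Parameter(torch.randn(num_tasks, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.keep_ratio = keep_ratio  # e.g. keep 10% of raw nodes

    def forward(self, x: torch.Tensor):
        # x: (batch, num_nodes, dim) padded node embeddings.
        b, n, d = x.shape
        queries = self.task_nodes.unsqueeze(0).expand(b, -1, -1)  # (b, K, d)
        # Task nodes attend over the raw nodes.
        task_emb, attn_w = self.attn(queries, x, x)               # attn_w: (b, K, n)
        # Attentiveness of a node = max attention it gets from any task node.
        scores = attn_w.max(dim=1).values                         # (b, n)
        k = max(1, int(n * self.keep_ratio))
        keep_idx = scores.topk(k, dim=-1).indices                 # (b, k)
        kept = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
        # Return the reduced node set and the task-specific graph embeddings.
        return kept, task_emb
\end{verbatim}

In this sketch, the returned task embeddings serve as the $K$ task-specific graph representations, while downstream GNN layers operate only on the kept nodes, which is where the memory and speed savings would come from.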