Graph neural networks have become a staple in problems involving the learning and analysis of data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this paper, we investigate the recently proposed randomly wired architectures in the context of graph neural networks. Instead of building deeper networks by stacking many layers, we prove that employing a randomly wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths. We also provide extensive experimental evidence of the superior performance of randomly wired architectures over multiple tasks and four graph convolution definitions, using recent benchmarking frameworks that address the reliability of previous testing methodologies.
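To make the "ensemble of paths" intuition concrete, the following minimal sketch (our own illustration, not the authors' implementation; the names `SimpleGraphConv`, `RandomlyWiredGNN`, and the precomputed normalized adjacency `a_hat` are assumptions) wires graph convolutions into a random DAG and gates each incoming wire with a trainable scalar, so training can widen or narrow the effective receptive field reaching each node of the architecture:

```python
# A hedged sketch of a randomly wired GNN: each wired node applies one
# graph convolution to a learned, weighted mixture of the outputs of its
# randomly chosen predecessors, so different paths through the DAG
# contribute receptive fields of different sizes.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One GCN-style propagation step: X' = ReLU(A_hat @ X @ W).
    A_hat is assumed to be a precomputed normalized adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, a_hat, x):
        return torch.relu(a_hat @ self.lin(x))

class RandomlyWiredGNN(nn.Module):
    def __init__(self, dim, num_nodes=8, edge_prob=0.6, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Random DAG: node j may receive a wire from any earlier node i < j;
        # the `or [j - 1]` fallback keeps every node connected.
        self.preds = [
            [i for i in range(j) if torch.rand(1, generator=g).item() < edge_prob]
            or [j - 1]
            for j in range(1, num_nodes)
        ]
        self.convs = nn.ModuleList(SimpleGraphConv(dim) for _ in range(num_nodes))
        # Trainable aggregation weights: one scalar gate per incoming wire.
        self.wire_w = nn.ParameterList(
            nn.Parameter(torch.zeros(len(p))) for p in self.preds
        )

    def forward(self, a_hat, x):
        outs = [self.convs[0](a_hat, x)]           # source node
        for j, preds in enumerate(self.preds):
            w = torch.sigmoid(self.wire_w[j])      # gate each incoming path
            mixed = sum(wk * outs[i] for wk, i in zip(w, preds))
            outs.append(self.convs[j + 1](a_hat, mixed))
        return outs[-1]                            # sink node
```

A toy usage, with a row-normalized random adjacency standing in for a real graph:

```python
n, d = 10, 16
a = torch.rand(n, n)
a_hat = a / a.sum(dim=1, keepdim=True)    # toy normalized adjacency
model = RandomlyWiredGNN(d)
y = model(a_hat, torch.randn(n, d))       # (10, 16) node features
```

Because every source-to-sink path traverses a different number of convolutions, the gated mixture at each node blends receptive fields of several depths, rather than committing to the single depth a plain layer stack would impose.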