Recent works have impressively demonstrated that randomly initialized convolutional neural networks (CNNs) contain subnetworks that, at initialization and without any optimization of the network weights (i.e., untrained networks), can match the performance of fully trained dense networks. However, whether such untrained subnetworks exist in graph neural networks (GNNs) remains mysterious. In this paper, we carry out a first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we find \textit{untrained sparse subnetworks} at initialization that can match the performance of \textit{fully trained dense} GNNs. Beyond this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, making them a powerful tool for enabling deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks exhibit appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
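To make the core idea concrete, below is a minimal sketch (not necessarily the authors' exact procedure) of how an untrained subnetwork can be found in a GNN layer: the weights are frozen at their random initialization, and only a score per weight is trained; the forward pass keeps the top-scoring fraction of weights via a straight-through estimator, in the spirit of edge-popup-style supermask training. The names \texttt{GetSubnet}, \texttt{MaskedGCNLayer}, and the \texttt{sparsity} parameter are illustrative assumptions, not identifiers from the paper.

\begin{verbatim}
# Hypothetical sketch: supermask search over a frozen, randomly
# initialized GCN-style layer (plain PyTorch, no external GNN library).
import torch
import torch.nn as nn


class GetSubnet(torch.autograd.Function):
    """Binarize scores: keep the top-(1 - sparsity) fraction, zero the rest."""

    @staticmethod
    def forward(ctx, scores, sparsity):
        k = int((1.0 - sparsity) * scores.numel())
        flat = scores.flatten()
        mask = torch.zeros_like(flat)
        _, idx = flat.topk(k)
        mask[idx] = 1.0
        return mask.view_as(scores)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients to the scores unchanged.
        return grad_output, None


class MaskedGCNLayer(nn.Module):
    """GCN-style layer whose weights stay at random init; only scores train."""

    def __init__(self, in_dim, out_dim, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim),
                                   requires_grad=False)
        nn.init.kaiming_normal_(self.weight)          # frozen random weights
        self.scores = nn.Parameter(0.01 * torch.randn_like(self.weight))
        self.sparsity = sparsity

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency matrix (dense, for simplicity).
        mask = GetSubnet.apply(self.scores, self.sparsity)
        return adj_norm @ (x @ (self.weight * mask))  # propagate, then transform
\end{verbatim}

In this sketch, the optimizer would update only \texttt{scores}, so the discovered subnetwork consists entirely of untrained (randomly initialized) weights selected by the learned mask.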