Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data. As they generalize the operations of classical CNNs on grids to arbitrary topologies, GNNs also inherit many of the implementation challenges of their Euclidean counterparts. Model size, memory footprint, and energy consumption are common concerns for many real-world applications. Network binarization allocates a single bit to parameters and activations, thus dramatically reducing memory requirements (up to 32x compared to single-precision floating-point numbers) and maximizing the benefits of fast SIMD instructions on modern hardware for measurable speedups. However, in spite of the large body of work on binarization for classical CNNs, this area remains largely unexplored in geometric deep learning. In this paper, we present and evaluate different strategies for the binarization of graph neural networks. We show that through careful model design and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks. In particular, we present the first dynamic graph neural network in Hamming space, able to leverage efficient k-NN search on binary vectors to speed up the construction of the dynamic graph. We further verify that the binary models offer significant savings on embedded devices. Our code is publicly available on GitHub.
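To make the binarization idea concrete, the sketch below shows a sign quantizer with a straight-through estimator (STE) gradient, a standard building block in the binary-network literature. This is an illustrative PyTorch snippet under that assumption, not the exact scheme evaluated in the paper.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE).

    Forward: quantize each entry to {-1, +1} (zeros map to +1).
    Backward: pass gradients through where |x| <= 1, zero elsewhere
    (the common hard-tanh STE used to train binary networks).
    NOTE: illustrative sketch, not the paper's exact formulation.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clip gradients outside [-1, 1], identity inside.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage: b = BinarizeSTE.apply(weights)  # b is in {-1, +1}^n
```

Binary values are then stored bit-packed, which is where the up-to-32x memory reduction over float32 comes from.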
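The speedup from k-NN search in Hamming space comes from replacing floating-point distance computations with XOR and popcount on bit-packed codes. The following NumPy sketch of a brute-force Hamming k-NN is hypothetical (the helper hamming_knn and its layout conventions are ours, not the released code), but it illustrates the mechanism.

```python
import numpy as np

def hamming_knn(queries, database, k):
    """Brute-force k-NN under Hamming distance on bit-packed codes.

    queries:  (m, B) uint8 array, each row a packed binary code
    database: (n, B) uint8 array with the same code width
    Returns an (m, k) array of indices of the k nearest codes.
    NOTE: hypothetical sketch; an optimized kernel would use
    hardware POPCNT/SIMD instead of np.unpackbits.
    """
    # XOR reveals the differing bits between each query/database pair.
    xor = queries[:, None, :] ^ database[None, :, :]      # (m, n, B)
    # Popcount: unpack to individual bits and sum them per pair.
    dists = np.unpackbits(xor, axis=-1).sum(axis=-1)      # (m, n)
    return np.argsort(dists, axis=1)[:, :k]
```

Real-valued embeddings x would first be packed, e.g. codes = np.packbits((x >= 0).astype(np.uint8), axis=1); mapping the popcount to native POPCNT/SIMD instructions is the source of the hardware savings the abstract refers to.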