Graph Neural Networks (GNNs) present a fundamental hardware challenge by fusing irregular, memory-bound graph traversals with regular, compute-intensive dense matrix operations. While frameworks such as PyTorch Geometric (PyG) and Deep Graph Library (DGL) prioritize high-level usability, they fail to address these divergent execution characteristics. As a result, they rely on generic kernels that suffer from poor cache locality, excessive memory movement, and substantial intermediate allocations. To address these limitations, we present Morphling, a domain-specific code synthesizer designed to bridge this gap. Morphling compiles high-level GNN specifications into portable, backend-specialized implementations targeting OpenMP, CUDA, and MPI. It achieves this by instantiating a library of optimized, architecture-aware primitives tailored to each execution environment. Morphling also incorporates a runtime sparsity-aware execution engine that dynamically selects dense or sparse execution paths using input feature statistics, reducing unnecessary computation on zero-valued entries. We evaluate Morphling on eleven real-world datasets spanning diverse graph structures, feature dimensionalities, and sparsity regimes. The results show that Morphling improves per-epoch training throughput by an average of 20X on CPUs and 19X on GPUs over PyG and DGL, with peak speedups reaching 66X. Morphling's memory-efficient layouts further reduce peak memory consumption by up to 15X, enabling large-scale GNN training on commodity hardware. These findings demonstrate that specialized, architecture-aware code synthesis provides an effective and scalable path toward high-performance GNN execution across diverse parallel and distributed platforms.
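The sparsity-aware dispatch described above can be illustrated with a minimal sketch. This is not Morphling's implementation: the function name, the `zero_threshold` knob, and the use of SciPy kernels are all illustrative assumptions; the abstract only states that dense or sparse paths are selected from input feature statistics.

```python
import numpy as np
from scipy import sparse

def sparsity_aware_spmm(adj_csr, feats, zero_threshold=0.7):
    """Multiply a CSR adjacency matrix by a node-feature matrix,
    choosing a sparse or dense path from the feature statistics.

    zero_threshold is a hypothetical tuning knob, not a value
    taken from the paper.
    """
    # Measure the fraction of zero-valued feature entries.
    zero_frac = 1.0 - np.count_nonzero(feats) / feats.size
    if zero_frac >= zero_threshold:
        # Mostly-zero features: route through a sparse-sparse multiply
        # so no work is spent on zero-valued entries.
        return (adj_csr @ sparse.csr_matrix(feats)).toarray()
    # Dense features: a plain dense SpMM has better locality here.
    return adj_csr @ feats
```

Both paths produce the same result; the runtime check only trades off which kernel computes it.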