Community detection is the problem of identifying natural divisions in networks. Efficient parallel algorithms for identifying such divisions are critical in a number of applications where dataset sizes have reached significant scales. This technical report presents one of the most efficient implementations of the Leiden algorithm, a high-quality community detection method. On a server equipped with dual 16-core Intel Xeon Gold 6226R processors, our Leiden implementation, which we term GVE-Leiden, outperforms the original Leiden, igraph Leiden, NetworKit Leiden, and cuGraph Leiden (running on an NVIDIA A100 GPU) by 436x, 104x, 8.2x, and 3.0x respectively, achieving a processing rate of 403M edges/s on a 3.8B-edge graph. In addition, GVE-Leiden's performance improves at an average rate of 1.6x for every doubling of threads.
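For readers unfamiliar with the task, the following is a minimal illustrative sketch (not the authors' GVE-Leiden code) of Leiden community detection using python-igraph, one of the baseline implementations compared against above; the graph choice and parameters are our own for illustration.

```python
import igraph as ig

# Zachary's karate club graph, a standard community-detection benchmark.
g = ig.Graph.Famous("Zachary")

# Run the Leiden algorithm with the modularity objective, via igraph's
# built-in community_leiden method; returns a VertexClustering.
partition = g.community_leiden(objective_function="modularity")

print(f"Communities found: {len(partition)}")
print(f"Modularity: {partition.modularity:.3f}")
```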