Diverse scientific and engineering research areas deal with discrete, time-stamped changes in large systems of interacting delay differential equations. Simulating such complex systems at scale on high-performance computing clusters demands efficient management of communication and memory. Inspired by the human cerebral cortex -- a sparsely connected network of $\mathcal{O}(10^{10})$ neurons, each forming $\mathcal{O}(10^{3})$--$\mathcal{O}(10^{4})$ synapses and communicating via short electrical pulses called spikes -- we study the simulation of large-scale spiking neural networks for computational neuroscience research. This work presents a novel network construction method for multi-GPU clusters and upcoming exascale supercomputers using the Message Passing Interface (MPI), where each process builds its local connectivity and prepares the data structures for efficient spike exchange across the cluster during state propagation. We demonstrate the scaling performance of two cortical models using point-to-point and collective communication, respectively.