Rapid advances in artificial intelligence (AI) technology have led to significant accuracy improvements in a myriad of application domains at the cost of larger and more compute-intensive models. Training such models on massive amounts of data typically requires scaling to many compute nodes and relies heavily on collective communication algorithms, such as all-reduce, to exchange the weight gradients between different nodes. The overhead of these collective communication operations in a distributed AI training system can bottleneck its performance, with more pronounced effects as the number of nodes increases. In this paper, we first characterize the all-reduce operation overhead by profiling distributed AI training. Then, we propose a new smart network interface card (NIC) for distributed AI training systems using field-programmable gate arrays (FPGAs) to accelerate all-reduce operations and optimize network bandwidth utilization via data compression. The AI smart NIC frees up the system's compute resources to perform the more compute-intensive tensor operations and increases the overall node-to-node communication efficiency. We perform real measurements on a prototype distributed AI training system composed of 6 compute nodes to evaluate the performance gains of our proposed FPGA-based AI smart NIC compared to a baseline system with regular NICs. We also use these measurements to validate an analytical model that we formulate to predict performance when scaling to larger systems. Our proposed FPGA-based AI smart NIC enhances overall training performance by 1.6x at 6 nodes, with an estimated 2.5x performance improvement at 32 nodes, compared to the baseline system using conventional NICs.
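To make the all-reduce operation discussed above concrete, the following is a minimal, pure-Python sketch of the classic bandwidth-optimal ring all-reduce (reduce-scatter followed by all-gather), the algorithm commonly used to sum weight gradients across compute nodes. It simulates the node buffers in a single process and is purely illustrative; it does not reflect the paper's FPGA implementation, and the function name and structure are assumptions for exposition.

```python
def ring_allreduce(grads):
    """Sum equal-length 'gradient' vectors across simulated nodes using
    ring all-reduce: a reduce-scatter phase then an all-gather phase.
    Assumes vector length is divisible by the node count, for simplicity."""
    n = len(grads)
    chunk = len(grads[0]) // n
    buf = [list(g) for g in grads]  # per-node working buffers

    # Phase 1: reduce-scatter. At step s, node i sends chunk (i - s) % n to
    # node (i + 1) % n, which accumulates it. After n - 1 steps, node i
    # holds the fully summed chunk (i + 1) % n.
    for s in range(n - 1):
        sends = [(i, (i - s) % n) for i in range(n)]
        data = {i: buf[i][c * chunk:(c + 1) * chunk] for i, c in sends}
        for i, c in sends:
            dst, lo = (i + 1) % n, c * chunk
            for k, v in enumerate(data[i]):
                buf[dst][lo + k] += v

    # Phase 2: all-gather. The completed chunks circulate around the ring,
    # overwriting stale data, until every node holds the full sum.
    for s in range(n - 1):
        sends = [(i, (i + 1 - s) % n) for i in range(n)]
        data = {i: buf[i][c * chunk:(c + 1) * chunk] for i, c in sends}
        for i, c in sends:
            dst, lo = (i + 1) % n, c * chunk
            buf[dst][lo:lo + chunk] = data[i]
    return buf
```

Each node transmits roughly 2(n-1)/n times the gradient size regardless of node count, which is why the per-step communication cost, rather than bandwidth volume, dominates as systems scale; this is the overhead the proposed smart NIC targets.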