The field of deep learning has witnessed a remarkable shift towards extremely compute- and memory-intensive neural networks. These newer, larger models have enabled researchers to advance the state of the art across a variety of fields. This phenomenon has spurred the development of algorithms for distributed training of neural networks over large numbers of hardware accelerators. In this paper, we discuss and compare current state-of-the-art frameworks for large-scale distributed deep learning. First, we survey current practices in distributed learning and identify the different types of parallelism used. Then, we present empirical results comparing their performance on large image and language training tasks. Additionally, we address their statistical efficiency and memory consumption behavior. Based on our results, we discuss the algorithmic and implementation aspects of each framework that hinder performance.