The widely adopted practice is to train deep learning models on specialized hardware accelerators, e.g., GPUs or TPUs, due to their superior performance on linear algebra operations. However, this strategy does not effectively employ the extensive CPU and memory resources available by default on the accelerated servers -- resources that are used only for preprocessing, data transfer, and scheduling. In this paper, we study training algorithms for deep learning on heterogeneous CPU+GPU architectures. Our two-fold objective -- to simultaneously maximize convergence rate and resource utilization -- makes the problem challenging. In order to allow for a principled exploration of the design space, we first introduce a generic deep learning framework that exploits the difference in computational power and memory hierarchy between CPU and GPU through asynchronous message passing. Based on insights gained through experimentation with the framework, we design two heterogeneous asynchronous stochastic gradient descent (SGD) algorithms. The first algorithm -- CPU+GPU Hogbatch -- combines small batches on the CPU with large batches on the GPU in order to maximize the utilization of both resources. However, this generates an unbalanced distribution of model updates, which hinders statistical convergence. The second algorithm -- Adaptive Hogbatch -- assigns batches of continuously evolving size based on the relative speed of CPU and GPU. This balances the model update ratio at the expense of a customizable decrease in utilization. We show that the implementation of these algorithms in the proposed CPU+GPU framework achieves both faster convergence and higher resource utilization than TensorFlow on several real datasets and on two computing architectures -- an on-premises server and a cloud instance.
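The batch-size adaptation idea behind Adaptive Hogbatch can be illustrated with a minimal sketch. The function below splits a global batch between CPU and GPU in proportion to their measured per-example throughput, so that both devices finish their batches in roughly the same time and contribute updates at a balanced rate. The function name and the speed-proportional rule are illustrative assumptions for exposition, not the paper's exact scheme.

```python
def adapt_batch_sizes(total_batch, cpu_time_per_example, gpu_time_per_example):
    """Split a global batch between CPU and GPU proportionally to their speeds.

    Assumed inputs: measured seconds per training example on each device.
    Returns (cpu_batch, gpu_batch) summing to total_batch.
    """
    cpu_speed = 1.0 / cpu_time_per_example  # examples per second on the CPU
    gpu_speed = 1.0 / gpu_time_per_example  # examples per second on the GPU
    cpu_share = cpu_speed / (cpu_speed + gpu_speed)
    # Give each device work proportional to its speed; keep at least one
    # example on the CPU so it keeps producing updates.
    cpu_batch = max(1, round(total_batch * cpu_share))
    gpu_batch = total_batch - cpu_batch
    return cpu_batch, gpu_batch
```

In a training loop, the timing measurements would be refreshed every few iterations, letting the split track changes in device load, which is the "continuously evolving size" behavior described above.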