Deep learning based recommendation models (DLRMs) are widely used in several business-critical applications. Training such recommendation models efficiently is challenging primarily because they consist of billions of embedding-based parameters, which are often stored remotely, leading to significant overheads from embedding access. By profiling existing DLRM training, we observe that only 8.5% of the iteration time is spent in the forward/backward pass, while the remaining time is spent on embedding access and model synchronization. Our key insight in this paper is that embedding accesses have a specific structure and pattern that can be used to accelerate training. We observe that embedding accesses are heavily skewed, with almost 1% of embeddings representing more than 92% of total accesses. Further, we observe that during training we can look ahead at future batches to determine exactly which embeddings will be needed at which iteration. Based on these insights, we propose Bagpipe, a system for training deep recommendation models that uses caching and prefetching to overlap remote embedding accesses with computation. We design an Oracle Cacher, a new system component that uses our lookahead algorithm to generate optimal cache update decisions and provide strong consistency guarantees. Our experiments using three datasets and two models show that our approach provides a speedup of up to 6.2x compared to state-of-the-art baselines, while providing the same convergence and reproducibility guarantees as synchronous training.
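To make the lookahead idea concrete, the following is a minimal sketch (not the actual Oracle Cacher implementation; the function name, batch representation, and eviction tie-breaking are assumptions) of how scanning a window of future batches lets a planner emit per-iteration prefetch and eviction decisions, using Belady-style eviction of the entry whose next use is furthest in the future:

```python
from collections import defaultdict

def plan_cache_updates(future_batches, cache_capacity):
    """Lookahead cache planner sketch: given a window of future training
    batches (each a list of embedding ids), emit per-iteration
    (prefetch, evict) decisions so that every embedding a batch needs
    is resident in the cache when that iteration runs."""
    # Record, for every embedding id, the iterations at which it is accessed.
    uses = defaultdict(list)
    for t, batch in enumerate(future_batches):
        for e in batch:
            uses[e].append(t)

    def next_use(e, t):
        # First access of e strictly after iteration t; inf if never reused.
        return next((u for u in uses[e] if u > t), float("inf"))

    cache, plan = set(), []
    for t, batch in enumerate(future_batches):
        prefetch = sorted(set(batch) - cache)   # ids that must be fetched remotely
        cache |= set(batch)
        evict = []
        while len(cache) > cache_capacity:
            # Belady's rule: evict the non-current entry reused furthest in the future.
            victim = max(cache - set(batch), key=lambda e: next_use(e, t))
            cache.remove(victim)
            evict.append(victim)
        plan.append((prefetch, sorted(evict)))
    return plan
```

Because the plan is computed ahead of the training loop, the remote fetches in `prefetch` can be overlapped with the forward/backward computation of earlier iterations, which is the overlap the abstract describes.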