Personalized recommendation models (RecSys) are among the most popular machine learning workloads serviced by hyperscalers. A critical challenge in training RecSys is its high memory capacity requirement, with model sizes reaching hundreds of GBs to TBs. In RecSys, the so-called embedding layers account for the majority of memory usage, so current systems employ a hybrid CPU-GPU design in which the large CPU memory stores the memory-hungry embedding layers. Unfortunately, training embeddings involves several memory-bandwidth-intensive operations that are at odds with the slow CPU memory, causing performance overheads. Prior work proposed caching frequently accessed embeddings inside GPU memory as a means to filter down the embedding layer traffic to CPU memory, but this paper observes several limitations with such cache designs. In this work, we present a fundamentally different approach to designing embedding caches for RecSys. Our proposed ScratchPipe architecture utilizes unique properties of RecSys training to develop an embedding cache that sees not only past but also "future" cache accesses. ScratchPipe exploits this property to guarantee that the active working set of the embedding layers can "always" be captured inside our proposed cache design, enabling embedding layer training to be conducted at GPU memory speed.
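The key property the abstract relies on, that in RecSys training the upcoming mini-batches (and hence the future embedding-row accesses) are known before they execute, can be illustrated with a toy sketch. All names here (`OracleEmbeddingCache`, `prefetch`, `lookup`) are our own illustrative inventions, not ScratchPipe's actual interface, and the dictionaries stand in for CPU and GPU memory:

```python
class OracleEmbeddingCache:
    """Toy cache exploiting the RecSys property that future
    embedding accesses are known ahead of time: the working set
    of upcoming batches is staged into fast memory in advance."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store  # stands in for slow, large CPU memory
        self.cache = {}               # stands in for fast, small GPU memory

    def prefetch(self, future_batches):
        """Stage every embedding row the next batches will touch."""
        needed = {idx for batch in future_batches for idx in batch}
        assert len(needed) <= self.capacity, "working set must fit in cache"
        # Evict rows that are not needed soon, then pull in missing rows.
        for idx in [i for i in self.cache if i not in needed]:
            del self.cache[idx]
        for idx in needed:
            if idx not in self.cache:
                self.cache[idx] = self.backing[idx]

    def lookup(self, batch):
        """After prefetch, every access is guaranteed to hit fast memory."""
        return [self.cache[idx] for idx in batch]


# Example: a 10-row embedding table residing in "CPU memory".
table = {i: [float(i)] * 4 for i in range(10)}
cache = OracleEmbeddingCache(capacity=6, backing_store=table)
upcoming = [[0, 3, 5], [3, 5, 7]]  # future batches, known in advance
cache.prefetch(upcoming)           # stage the working set ahead of time
vectors = cache.lookup(upcoming[0])  # served entirely from fast memory
```

Unlike a reactive cache (e.g. LRU), which can miss on any previously unseen row, this look-ahead scheme never misses as long as the staged working set fits in capacity, which is the guarantee the abstract attributes to ScratchPipe.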