Embedding tables are used by machine learning systems to work with categorical features. These tables can become exceedingly large in modern recommendation systems, necessitating the development of new methods for fitting them in memory, even during training. The best previous methods for table compression are so-called "post-training" quantization schemes, such as "product" and "residual" quantization (Gray & Neuhoff, 1998). These methods replace table rows with references to k-means clustered "codewords". Unfortunately, clustering requires prior knowledge of the table to be compressed, which limits the memory savings to inference time rather than training time. Hence, recent work, like the QR method (Shi et al., 2020), has used random references (linear sketching), which can be computed with hash functions before training. Unfortunately, the compression achieved is inferior to that of post-training quantization. Our new algorithm, CQR, shows how to get the best of both worlds by combining clustering and sketching: First, IDs are randomly assigned to a codebook and the codewords are trained (end to end) for an epoch. Next, we expand the codebook and apply clustering to reduce its size again. Finally, we add new random references and continue training. We show experimentally that CQR achieves compression ratios close to those of post-training quantization, with the training-time memory reductions of sketch-based methods, and we prove that our method always converges to the optimal embedding table for least-squares training.
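The alternation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the IDs, dimensions, and the plain k-means helper are hypothetical choices, and real training of the codebook (the "end to end" epoch) is elided as a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
num_ids, dim = 1000, 8
codebook_size = 64

# Step 1: random (hash-like) assignment of IDs to codewords, as in
# sketch-based methods -- requires no prior knowledge of the table.
assignment = rng.integers(0, codebook_size, size=num_ids)
codebook = rng.normal(size=(codebook_size, dim))
# ... here the codebook would be trained end to end for an epoch ...

def embed(ids):
    # Each ID shares the codeword its random reference points to.
    return codebook[assignment[ids]]

def kmeans(points, k, iters=10):
    # Plain Lloyd's k-means, standing in for the clustering step.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Step 2: cluster the (expanded) codebook back down to a smaller one,
# then remap every ID through the cluster labels.
codebook, relabel = kmeans(codebook, 16)
assignment = relabel[assignment]
# Step 3 would add fresh random references and resume training.
```

The remap `relabel[assignment]` is the key move: IDs that previously pointed to similar codewords now share a single cluster centroid, shrinking the table while preserving the learned structure.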