On-device session-based recommendation systems have attracted increasing attention due to their low energy/resource consumption and privacy protection, while still providing promising recommendation performance. To fit powerful neural session-based recommendation models into resource-constrained mobile devices, tensor-train decomposition and its variants have been widely applied to reduce the memory footprint by decomposing the embedding table into smaller tensors, showing great potential for compressing recommendation models. However, these model compression techniques significantly increase local inference time, owing to the complex process of generating index lists and the series of tensor multiplications required to form item embeddings, so the resulting on-device recommender fails to deliver real-time responses and recommendations. To improve online recommendation efficiency, we propose to learn compositional encoding-based compact item representations. Specifically, each item is represented by a compositional code consisting of several codewords, and we learn an embedding vector for each codeword instead of for each item. The composition of the codeword embedding vectors drawn from different embedding matrices (i.e., codebooks) then forms the item embedding. Since the codebooks can be extremely small, the recommender model fits in resource-constrained devices and can store the codebooks for fast local inference. Besides, to prevent the loss of model capacity caused by compression, we propose a bidirectional self-supervised knowledge distillation framework. Extensive experimental results on two benchmark datasets demonstrate that, compared with existing methods, the proposed on-device recommender not only achieves an 8x inference speedup at a large compression ratio but also shows superior recommendation performance.
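To make the compositional-encoding idea concrete, the following is a minimal sketch of how item embeddings could be composed from small codebooks. All names, sizes, and the summation-based composition are illustrative assumptions, not the paper's exact design; the point is only that the parameter count scales with the codebooks rather than with the item vocabulary.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's settings):
num_items, num_codebooks, codebook_size, dim = 1000, 4, 16, 32

rng = np.random.default_rng(0)
# Each item is assigned a compositional code: one codeword index per codebook.
item_codes = rng.integers(0, codebook_size, size=(num_items, num_codebooks))
# Codebooks: small embedding matrices shared across all items.
codebooks = rng.standard_normal((num_codebooks, codebook_size, dim))

def item_embedding(item_id: int) -> np.ndarray:
    """Compose an item embedding by summing its codeword embeddings
    (summation is one possible composition; concatenation is another)."""
    code = item_codes[item_id]
    return sum(codebooks[m, code[m]] for m in range(num_codebooks))

emb = item_embedding(42)
# Stored parameters: 4 * 16 * 32 = 2,048 codebook entries
# versus a full embedding table of 1000 * 32 = 32,000 entries.
```

Under these toy sizes the codebooks hold roughly 15x fewer parameters than a full embedding table, and forming an embedding needs only a few small lookups and additions, which is what enables fast local inference on-device.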