Inspired by the success of language models (LMs), scaling up deep learning recommendation systems (DLRS) has become a recent trend in the community. Previous methods all scale up model parameters at training time. However, how to efficiently utilize and scale up computational resources at test time remains underexplored, even though test-time scaling has proven to be a compute-efficient approach that brings orthogonal improvements in the LM domain. The key to applying test-time scaling to DLRS lies in effectively generating diverse yet meaningful outputs for the same instance. We propose two ways to do so: one exploits the heterogeneity of different model architectures, and the other exploits the randomness of model initialization under a homogeneous architecture. We evaluate eight models, covering both classic and state-of-the-art (SOTA) architectures, on three benchmarks, and the results provide strong evidence for the effectiveness of both solutions. We further show that, under the same inference budget, test-time scaling can outperform parameter scaling. When deployed online, our test-time scaling can also be seamlessly accelerated by adding parallel servers, without affecting user-side inference time. Code is available.
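To make the homogeneous-architecture variant concrete, the sketch below aggregates the predictions of several independently initialized copies of the same recommendation model for one batch of instances. The `SimpleDLRS` backbone, the ensemble size, and mean aggregation are illustrative assumptions rather than the paper's exact setup; they only demonstrate how per-instance diversity from random initialization can be combined at test time, and how each copy could run on its own server so user-side latency stays roughly constant.

```python
# Minimal sketch (assumed setup): test-time scaling via multiple random
# initializations of one homogeneous DLRS architecture.
import torch
import torch.nn as nn


class SimpleDLRS(nn.Module):
    """Stand-in recommendation model: a small MLP over dense features."""

    def __init__(self, num_features: int = 32, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predicted interaction probability for each instance in the batch.
        return torch.sigmoid(self.net(x)).squeeze(-1)


def test_time_scaled_predict(models, x):
    """Aggregate per-instance scores from independently initialized models.

    Each model copy is independent, so in an online deployment every copy
    can be served on a separate parallel server and the aggregation cost
    stays negligible on the user side.
    """
    with torch.no_grad():
        scores = torch.stack([m(x) for m in models], dim=0)  # (n_models, batch)
    return scores.mean(dim=0)  # assumed aggregation: simple averaging


if __name__ == "__main__":
    torch.manual_seed(0)
    # Four random initializations of the same (homogeneous) architecture;
    # in practice each would be trained before test-time aggregation.
    ensemble = [SimpleDLRS() for _ in range(4)]
    batch = torch.randn(8, 32)  # 8 instances, 32 dense features
    print(test_time_scaled_predict(ensemble, batch))
```

The heterogeneous variant would follow the same aggregation pattern, but with structurally different backbones in place of the identical copies.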