A recent trend shows that a general class of models, e.g., BERT, GPT-3, and CLIP, trained on broad data at scale, exhibits a wide variety of capabilities within a single learning architecture. In this work, we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scale. We demonstrate that the scaling law holds in the user modeling area, where the training error scales as a power law with the amount of compute. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings stretch our expectations of what is possible in various downstream tasks. CLUE also shows strong transferability to other domains and systems, as an online experiment shows significant improvements in online Click-Through Rate (CTR). Furthermore, we investigate how the performance changes according to the scale-up factors, i.e., model capacity, sequence length, and batch size.
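For reference, a minimal sketch of such a power-law relation, with hypothetical constants rather than fitted values from this work, is $L(C) \approx (C_0 / C)^{\alpha}$, where $L$ denotes the training error, $C$ the amount of training compute, $C_0$ a scale constant, and $\alpha > 0$ the scaling exponent.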