To meet the practical requirements of low latency, low cost, and privacy preservation in online intelligent services, more and more deep learning models are being offloaded from the cloud to mobile devices. To further deal with cross-device data heterogeneity, the offloaded models normally need to be fine-tuned with each individual user's local samples before being put into real-time inference. In this work, we focus on the fundamental click-through rate (CTR) prediction task in recommender systems and study how to perform on-device fine-tuning effectively and efficiently. We first identify the bottleneck issue that each individual user's local CTR (i.e., the ratio of positive samples in the local dataset for fine-tuning) tends to deviate from the global CTR (i.e., the ratio of positive samples in all the users' mixed datasets on the cloud for training the initial model). We further demonstrate that such a CTR drift problem can make on-device fine-tuning harmful to item ranking. We thus propose a novel label correction method, which requires each user only to relabel the local samples before on-device fine-tuning, thereby aligning the local prior CTR with the global CTR. Offline evaluation over three datasets and five CTR prediction models, together with online A/B testing in Mobile Taobao, demonstrates the necessity of label correction for on-device fine-tuning and shows the improvement over cloud-based learning without fine-tuning.
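To make the label-correction idea concrete, below is a minimal sketch in Python. It only illustrates the goal stated above, aligning a user's local positive ratio with the global CTR by relabeling local samples before fine-tuning; the function name correct_labels and the rule for choosing which samples to flip (uniform random here) are our assumptions for illustration, not the paper's exact procedure.

    import numpy as np

    def correct_labels(labels, global_ctr, seed=0):
        # Hypothetical sketch: relabel a user's local samples so the fraction
        # of positives matches the global CTR observed on the cloud. Flipped
        # samples are chosen uniformly at random, which is an assumption.
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels).copy()
        n = labels.size
        target_pos = int(round(global_ctr * n))  # positives implied by global CTR
        pos_idx = np.flatnonzero(labels == 1)
        neg_idx = np.flatnonzero(labels == 0)

        if pos_idx.size > target_pos:
            # Local CTR above the global CTR: flip surplus positives to negative.
            flip = rng.choice(pos_idx, size=pos_idx.size - target_pos, replace=False)
            labels[flip] = 0
        elif pos_idx.size < target_pos:
            # Local CTR below the global CTR: flip some negatives to positive.
            flip = rng.choice(neg_idx, size=target_pos - pos_idx.size, replace=False)
            labels[flip] = 1
        return labels

    # Example: a local dataset with CTR 0.3 corrected toward a global CTR of 0.1.
    local_labels = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
    print(correct_labels(local_labels, global_ctr=0.1))  # exactly one positive remains

The corrected labels would then be used as the targets for on-device fine-tuning in place of the raw local labels.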