In Federated Learning (FL) for click-through rate (CTR) prediction, users' data stays on their devices to protect privacy: training is performed locally on client devices, and only model updates are communicated to the server. There are two main challenges: (i) client heterogeneity, which causes FL algorithms that aggregate model updates by weighted averaging to converge slowly and produce unsatisfactory results; and (ii) the difficulty of tuning the server learning rate by trial and error, since each experiment requires substantial computation time and resources. To address these challenges, we propose a simple online meta-learning method that learns an aggregation strategy for the model updates: it adaptively weighs the importance of each client based on its attributes and adjusts the step size of the update. We perform extensive evaluations on public datasets. Our method significantly outperforms the state of the art in both convergence speed and the quality of the final learning results.
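To make the aggregation step concrete, here is a minimal sketch (not the paper's implementation) of server-side weighted averaging in FL. The function name `aggregate`, the per-client `weights`, and the `server_lr` parameter are illustrative assumptions; in the proposed method, the client weights and step size would be adapted by the online meta-learner rather than fixed.

```python
# Hypothetical sketch of server-side aggregation in federated learning.
# A model is represented as a flat list of parameters for simplicity.

def aggregate(global_model, client_deltas, client_weights, server_lr):
    """Weighted average of client model deltas, scaled by a server
    learning rate -- the two quantities the meta-learner would adapt."""
    total = sum(client_weights)
    new_model = []
    for i, param in enumerate(global_model):
        # Weighted mean of the i-th parameter's update across clients.
        avg_delta = sum(w * d[i] for w, d in zip(client_weights, client_deltas)) / total
        new_model.append(param + server_lr * avg_delta)
    return new_model

# Example: two clients updating a two-parameter model.
model = [0.0, 1.0]
deltas = [[0.2, -0.1], [0.4, 0.1]]
weights = [1.0, 3.0]  # e.g. proportional to each client's data size
updated = aggregate(model, deltas, weights, server_lr=1.0)
```

In standard FedAvg, `client_weights` are fixed (proportional to local dataset sizes); the abstract's point is that learning these weights and the step size online copes better with heterogeneous clients.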