Smart meter measurements, though critical for accurate demand forecasting, raise several concerns, including consumer privacy and the risk of data breaches, to name a few. Recent literature has explored Federated Learning (FL) as a promising privacy-preserving machine learning alternative for short-term load forecasting, enabling collaborative learning of a model without exposing private raw data. Despite its virtues, standard FL remains vulnerable to an intractable cyber threat known as a Byzantine attack, carried out by faulty and/or malicious clients. Therefore, to improve the robustness of federated short-term load forecasting against Byzantine threats, we develop a state-of-the-art differentially private, secured FL-based framework that ensures the privacy of individual smart meter data while protecting the security of FL models and architecture. Our proposed framework leverages the idea of gradient quantization through the Sign Stochastic Gradient Descent (SignSGD) algorithm, where clients transmit only the `sign' of the gradient to the control centre after local model training. As we highlight through experiments involving benchmark neural networks under a set of Byzantine attack models, our proposed approach mitigates such threats quite effectively and thus outperforms conventional Fed-SGD models.
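As a minimal, illustrative sketch only (not the authors' exact implementation), the snippet below shows the core SignSGD idea described above: each client sends only the element-wise sign of its local gradient, and the control centre aggregates by majority vote, which limits the influence of a sign-flipping Byzantine client. The NumPy-based functions, the toy gradient dimensions, and the single attacker model are assumptions introduced purely for illustration.

```python
# Illustrative sketch of one round of sign-based gradient aggregation
# in the spirit of SignSGD with majority vote (assumed setup, not the paper's code).
import numpy as np

def client_update(grad):
    """Each client quantizes its local gradient to its element-wise sign."""
    return np.sign(grad)

def server_aggregate(sign_grads):
    """The control centre aggregates received signs by element-wise majority vote."""
    return np.sign(np.sum(sign_grads, axis=0))

# Toy example: 4 honest clients and 1 Byzantine client that flips its signs.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=8)
honest = [client_update(true_grad + 0.1 * rng.normal(size=8)) for _ in range(4)]
byzantine = [-client_update(true_grad)]           # sign-flipping attacker
update = server_aggregate(honest + byzantine)     # majority vote suppresses the attacker

lr = 0.01
theta = np.zeros(8)
theta -= lr * update                              # global model step using the voted signs
```

Because only signs are exchanged, the per-round uplink cost is one bit per parameter, and a single malicious client cannot overturn the vote as long as a majority of clients report consistent signs.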