Non-intrusive load monitoring (NILM) decomposes the total load reading into appliance-level load signals. Many deep learning-based methods have been developed for NILM, and training deep neural networks (DNNs) requires massive load data covering different types of appliances. For local data owners with inadequate load data who still expect promising model performance, conducting effective NILM co-modeling is increasingly important. However, during the cooperation of local data owners, data exchange and centralized data storage may increase the risk of power consumer privacy breaches. To eliminate these potential risks, a novel NILM method named Fed-NILM, applying Federated Learning (FL), is proposed in this paper. In Fed-NILM, local model parameters instead of load data are shared among local data owners, and the global model is obtained by weighted averaging of the parameters. In the experiments, Fed-NILM is validated on two real-world datasets, and it is compared with locally-trained NILM models and a centrally-trained one in both residential and industrial scenarios. The experimental results show that Fed-NILM outperforms the locally-trained models and approximates the centrally-trained NILM, which is trained on the entire load dataset without privacy preservation.
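The global aggregation step described above, weighted averaging of locally-trained parameters, follows the same idea as the standard FedAvg scheme. A minimal sketch of such an aggregation, with hypothetical owner data and function names not taken from the paper:

```python
import numpy as np

def weighted_average(local_params, sample_counts):
    """FedAvg-style aggregation: average each parameter tensor across
    owners, weighting each owner by its share of the training samples."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()  # normalize so the weights sum to 1
    # zip(*local_params) groups the same layer from every owner together
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*local_params)
    ]

# Two hypothetical data owners sharing a one-layer model:
owner_a = [np.array([1.0, 3.0])]  # parameters trained on 100 samples
owner_b = [np.array([3.0, 5.0])]  # parameters trained on 300 samples
global_params = weighted_average([owner_a, owner_b], sample_counts=[100, 300])
print(global_params[0])  # weights 0.25 and 0.75 -> [2.5 4.5]
```

Only these aggregated parameters travel between parties; the raw load data never leaves each owner, which is the privacy mechanism the abstract refers to.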