Non-intrusive load monitoring (NILM) aims to decompose the total power consumption reading of a household into appliance-wise readings, which benefits consumer behavior analysis as well as energy conservation. NILM based on deep learning has become a focus of research. To train a better neural network, the network must be fed massive data that covers various appliances and reflects consumer behavior habits. Therefore, data cooperation among utilities and DNOs (distribution network operators), who own the NILM data, has become increasingly significant. During such cooperation, however, risks of consumer privacy leakage and loss of control over the data arise. To address these problems, a framework is set up to improve the performance of NILM with federated learning (FL). In the framework, model weights instead of local data are shared among utilities. The global model is generated by weighted averaging of the locally trained model weights, so that the information learned by the local models is aggregated. Optimal model selection helps choose the model that best adapts to data from different domains. Experiments show that the proposal improves the performance of local NILM runners, and that the performance of the framework is close to that of a model trained centrally on the pooled data without privacy protection.
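The weighted-averaging aggregation step described above can be sketched as follows. This is a minimal illustration in the spirit of FedAvg, not the paper's exact implementation; the utility names, layer keys, and sample counts are hypothetical placeholders.

```python
# Sketch of the aggregation step: each utility trains locally and shares only
# its model weights; the server averages them, weighted by local data size.
# All names and numbers below are illustrative assumptions.

def federated_average(local_weights, sample_counts):
    """Average each weight entry across clients, weighted by local sample count."""
    total = sum(sample_counts)
    global_weights = {}
    for key in local_weights[0]:
        global_weights[key] = sum(
            w[key] * n / total for w, n in zip(local_weights, sample_counts)
        )
    return global_weights

# Two hypothetical utilities share model weights, never their raw load data.
utility_a = {"layer1": 0.2, "layer2": 0.8}
utility_b = {"layer1": 0.6, "layer2": 0.4}
global_model = federated_average([utility_a, utility_b], sample_counts=[100, 300])
print(global_model)  # layer1 = 0.2*0.25 + 0.6*0.75 = 0.5
```

The weighting by sample count means utilities with more training data pull the global model further toward their local solution, which is the standard FedAvg design choice.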