Precise load forecasting in buildings could increase bill-savings potential and facilitate optimized strategies for power-generation planning. With the rapid evolution of computer science, data-driven techniques, in particular deep learning models, have become a promising solution to the load-forecasting problem. These models have shown accurate forecasting results; however, they need abundant historical data to maintain their performance. For new buildings and buildings with low-resolution measuring equipment, it is difficult to obtain enough historical data, which leads to poor forecasting performance. To adapt deep learning models to buildings with limited and scarce data, this paper proposes a Building-to-Building Transfer Learning framework that overcomes this problem and enhances the models' performance. The transfer learning approach was applied to the Transformer model, a recent architecture, because of its efficacy in capturing data trends. The performance of the algorithm was tested on a large commercial building with limited data. The results showed that the proposed approach improved forecasting accuracy by 56.8% compared with conventional deep learning, where training starts from scratch. The paper also compares the proposed Transformer model with other sequential deep learning models, namely Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) models. The Transformer model outperformed the others, reducing the root mean square error to 0.009, compared with 0.011 for the LSTM and 0.051 for the RNN.
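The core idea of the framework, independent of the specific network, can be illustrated with a deliberately simplified sketch: a model is first fitted on a source building with abundant data, and its weights are then used to warm-start fitting on a target building that has only a handful of samples. The toy linear model, the synthetic "load" data, and all variable names below are illustrative assumptions, not the paper's actual Transformer pipeline or dataset.

```python
import numpy as np

def fit(X, y, w0=None):
    """Least-squares fit of y ~ X @ w. When w0 is given, solve for the
    minimum-norm *update* to w0 -- a linear stand-in for fine-tuning
    pretrained weights (transfer learning)."""
    w0 = np.zeros(X.shape[1]) if w0 is None else w0
    dw, *_ = np.linalg.lstsq(X, y - X @ w0, rcond=None)
    return w0 + dw

def rmse(X, y, w):
    """Root mean square error of the linear predictor X @ w."""
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7, 0.3])      # hypothetical source-building dynamics

# Source building: plenty of historical data (slightly noisy).
Xs = rng.normal(size=(500, 3))
ys = Xs @ w_true + 0.05 * rng.normal(size=500)

# Target building: similar but shifted dynamics, and only two samples
# that, crucially, never excite the third feature.
Xt = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
yt = Xt @ (w_true + 0.1)

# Held-out target-building data for evaluation.
Xv = rng.normal(size=(200, 3))
yv = Xv @ (w_true + 0.1)

w_scratch  = fit(Xt, yt)                    # train on target data alone
w_source   = fit(Xs, ys)                    # pretrain on the source building
w_transfer = fit(Xt, yt, w0=w_source)       # fine-tune on the target building

print("scratch RMSE: ", rmse(Xv, yv, w_scratch))
print("transfer RMSE:", rmse(Xv, yv, w_transfer))
```

Training from scratch has no information about the third feature and misestimates its weight, while the warm-started fit inherits a good estimate from the source building and only corrects the directions the target data actually constrains; this is the mechanism by which transfer learning helps under data scarcity.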