Accurate wind turbine power curve models, which translate ambient conditions into turbine power output, are crucial for wind energy to scale and fulfill its proposed role in the global energy transition. While machine learning (ML) methods have shown significant advantages over parametric, physics-informed approaches, they are often criticised for being opaque 'black boxes', which hinders their application in practice. We apply Shapley values, a popular explainable artificial intelligence (XAI) method, and the latest findings from XAI for regression models, to uncover the strategies ML models have learned from operational wind turbine data. Our findings reveal that the trend towards ever larger model architectures, driven by a focus on test set performance, can result in physically implausible model strategies. Therefore, we call for a more prominent role of XAI methods in model selection. Moreover, we propose a practical approach to utilize explanations for root cause analysis in the context of wind turbine performance monitoring. This can help to reduce downtime and increase the utilization of turbines in the field.
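The Shapley-value idea mentioned above can be sketched in a few lines. The following is a minimal, self-contained illustration, not the paper's actual turbine model or data: `power_model` is a hypothetical toy power-curve surrogate, and the baseline-replacement convention for "absent" features is one common choice in XAI for regression. Each feature's attribution is its weighted average marginal contribution over all coalitions of the remaining features.

```python
from itertools import combinations
from math import factorial, cos, radians

def power_model(wind_speed, air_density, yaw_error):
    """Toy power-curve surrogate (hypothetical, for illustration only):
    P ~ 0.5 * rho * v^3, reduced by the cube of the yaw-misalignment cosine."""
    return 0.5 * air_density * wind_speed ** 3 * cos(radians(yaw_error)) ** 3

def shapley_values(f, x, baseline):
    """Exact Shapley values for a small feature set. A feature outside the
    coalition S is replaced by its baseline value; the weight is the standard
    |S|! (n-|S|-1)! / n! Shapley kernel."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                x_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                x_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(*x_with) - f(*x_without))
    return phi

# Hypothetical operating point and reference conditions:
x = (10.0, 1.225, 8.0)        # wind speed [m/s], air density [kg/m^3], yaw error [deg]
baseline = (0.0, 1.225, 0.0)  # no wind, nominal density, no yaw misalignment
phi = shapley_values(power_model, x, baseline)
# By the efficiency property, sum(phi) == power_model(*x) - power_model(*baseline);
# the yaw-error attribution is negative (it reduces output whenever wind is present).
```

The exact enumeration above scales as O(2^n) in the number of features, which is why practical tools approximate Shapley values by sampling coalitions or exploiting model structure; the explanatory interpretation is the same.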