Existing software-based energy measurements of NLP models are not accurate because they do not consider the complex interactions between energy consumption and model execution. We present IrEne, an interpretable and extensible energy prediction system that accurately predicts the inference energy consumption of a wide range of Transformer-based NLP models. IrEne constructs a model tree graph that breaks down the NLP model into modules, which are further broken down into low-level machine learning (ML) primitives. IrEne predicts the inference energy consumption of the ML primitives as a function of generalizable features and fine-grained runtime resource usage. IrEne then aggregates these low-level predictions recursively to predict the energy of each module and, finally, of the entire model. Experiments across multiple Transformer models show that IrEne predicts inference energy consumption with an error of under 7% compared to the ground truth. In contrast, existing energy models show errors of over 50%. We also show how IrEne can be used to conduct energy bottleneck analysis and to easily evaluate the energy impact of different architectural choices. We release the code and data at https://github.com/StonyBrookNLP/irene.
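To make the recursive aggregation described above concrete, the sketch below illustrates the general idea in Python: a model tree whose leaves are ML primitives with feature vectors, and whose internal nodes (modules, and ultimately the full model) accumulate their children's predicted energies. All names here (`ModelTreeNode`, `predict_primitive_energy`, the specific features and weights) are hypothetical illustrations under assumed inputs, not IrEne's actual API or learned predictor.

```python
# A minimal sketch of the model-tree idea, assuming a simple linear
# primitive-level predictor. Not IrEne's actual implementation.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelTreeNode:
    """A node in the model tree: the root is the full model, internal
    nodes are modules (e.g., a Transformer layer), and leaves are ML
    primitives (e.g., a matrix multiplication)."""
    name: str
    # Generalizable features and runtime resource usage (hypothetical keys).
    features: Dict[str, float] = field(default_factory=dict)
    children: List["ModelTreeNode"] = field(default_factory=list)


def predict_primitive_energy(features: Dict[str, float]) -> float:
    """Stand-in for a learned regressor mapping primitive features to
    energy in joules. The linear form and weights are assumptions."""
    weights = {"flops": 1e-9, "mem_bytes": 5e-10, "cpu_util": 0.02}
    return sum(weights.get(k, 0.0) * v for k, v in features.items())


def predict_energy(node: ModelTreeNode) -> float:
    """Recursive aggregation: leaves use the primitive-level predictor;
    internal nodes sum the predicted energies of their children."""
    if not node.children:  # leaf = ML primitive
        return predict_primitive_energy(node.features)
    return sum(predict_energy(child) for child in node.children)


# Usage: a toy self-attention module broken down into three primitives.
attention = ModelTreeNode("self_attention", children=[
    ModelTreeNode("linear_qkv", features={"flops": 3e9, "mem_bytes": 6e7}),
    ModelTreeNode("matmul_scores", features={"flops": 1e9, "mem_bytes": 2e7}),
    ModelTreeNode("softmax", features={"flops": 5e7, "mem_bytes": 1e7}),
])
print(f"Predicted energy: {predict_energy(attention):.3f} J")
```

Because every node carries its own prediction, the same traversal that yields the model-level estimate also exposes per-module estimates, which is what enables the energy bottleneck analysis mentioned in the abstract.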