To raise awareness of the substantial impact Deep Learning (DL) has on the environment, several works have tried to estimate the energy consumption and carbon footprint of DL-based systems across their life cycle. However, the energy estimations for the training stage usually rely on assumptions that have not been thoroughly tested. This study aims to move past these assumptions by leveraging the relationship between energy consumption and two relevant design decisions in DL training: model architecture and training environment. To investigate these relationships, we collect multiple metrics related to energy efficiency and model correctness during the models' training. Then, we outline the trade-offs between the measured energy consumption and the models' correctness with respect to model architecture, as well as their relationship with the training environment. Finally, we study the power consumption behavior during training and propose four new energy estimation methods. Our results show that selecting the proper model architecture and training environment can reduce energy consumption dramatically (up to 80.72%) at the cost of negligible decreases in correctness. We also find evidence that GPUs should scale with the models' computational complexity for better energy efficiency. Furthermore, we show that current energy estimation methods are unreliable and propose alternatives that are twice as precise.
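To make the gap concrete, the sketch below contrasts a conventional estimation assumption (the GPU draws its rated TDP for the entire run) with trapezoidal integration of periodically sampled power draw. This is a minimal illustration of the kind of untested assumption the abstract refers to, not the paper's four proposed methods; all function names, the power trace, and the numbers are hypothetical.

```python
# Minimal sketch (not the paper's method): the conventional TDP-based
# energy estimate vs. integration of sampled power draw.
# All names and numbers below are illustrative assumptions.


def naive_estimate_joules(tdp_watts: float, duration_s: float) -> float:
    """Common assumption: the GPU draws its full TDP for the whole run."""
    return tdp_watts * duration_s


def integrated_joules(samples_w: list[float], interval_s: float) -> float:
    """Trapezoidal integration of power (watts) sampled at fixed intervals."""
    return sum((a + b) / 2 * interval_s for a, b in zip(samples_w, samples_w[1:]))


# Hypothetical trace: power sampled once per second during a 5-second run,
# ramping up as training starts and dropping as it winds down.
trace = [95.0, 180.0, 210.0, 205.0, 150.0, 90.0]

print(naive_estimate_joules(tdp_watts=250.0, duration_s=5.0))  # 1250.0 J
print(integrated_joules(trace, interval_s=1.0))                # 837.5 J
```

Under these illustrative numbers, the TDP-based estimate overstates the measured energy by roughly 50%, which is consistent with the abstract's finding that such estimation methods can be unreliable.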