The energy consumption of deep learning models is growing at a breathtaking rate, raising concerns about its negative effects on carbon neutrality in the context of global warming and climate change. With advances in efficient deep learning techniques such as model compression, researchers can obtain efficient models with fewer parameters and lower latency. However, most existing efficient deep learning methods do not explicitly treat energy consumption as a key performance indicator. Moreover, existing methods mostly focus on the inference costs of the resulting efficient models, neglecting the notable energy consumed throughout the entire life cycle of the algorithm. In this paper, we present the first large-scale energy consumption benchmark for efficient computer vision models, together with a new metric that explicitly evaluates full-cycle energy consumption under different model usage intensities. The benchmark can provide insights for low carbon emission when selecting efficient deep learning algorithms for different model usage scenarios.
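The trade-off behind a full-cycle metric can be sketched in a few lines. This is a minimal illustration, not the paper's actual metric: it assumes full-cycle energy decomposes into a one-off cost of obtaining the model (training, compression, fine-tuning) plus a per-inference cost scaled by usage intensity, so that which model is "greener" depends on how often it is deployed. All function names and numbers below are hypothetical.

```python
# Hypothetical sketch of a full-cycle energy metric (NOT the paper's
# definition): one-off cost of obtaining the model plus per-inference
# cost accumulated over the model's usage intensity.

def full_cycle_energy(e_obtain_kwh: float, e_infer_kwh: float,
                      num_inferences: int) -> float:
    """Total life-cycle energy in kWh.

    e_obtain_kwh   -- one-off energy to produce the model
                      (training, compression, fine-tuning, search)
    e_infer_kwh    -- energy per single inference
    num_inferences -- usage intensity over the model's lifetime
    """
    return e_obtain_kwh + e_infer_kwh * num_inferences

# A compressed model may cost extra energy to obtain (e.g. pruning plus
# fine-tuning) yet save energy per inference; which one wins depends on
# the usage intensity.  Illustrative numbers only:
dense = full_cycle_energy(e_obtain_kwh=100.0, e_infer_kwh=0.002,
                          num_inferences=1_000_000)
compressed = full_cycle_energy(e_obtain_kwh=150.0, e_infer_kwh=0.001,
                               num_inferences=1_000_000)
print(dense, compressed)  # 2100.0 1150.0 -> compression pays off here
```

At low usage intensity (few inferences) the extra energy spent compressing the model may never be recouped, which is exactly why an inference-only comparison can mislead.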