The progress of some AI paradigms, such as deep learning, is said to be linked to an exponential growth in the number of parameters. Many studies corroborate these trends, but does this translate into an exponential increase in energy consumption? To answer this question we focus on inference costs rather than training costs, as the former account for most of the computing effort, purely because of multiplicative factors: a model is trained once but queried many times. Beyond algorithmic innovations, we also account for more specialised and powerful hardware (delivering higher FLOPS), which is usually accompanied by significant energy-efficiency optimisations. We further shift the focus from the first implementation of a breakthrough paper to the consolidated versions of the techniques one or two years later. Under this distinctive and comprehensive perspective, we study relevant models in computer vision and natural language processing: for a sustained increase in performance we observe a much softer growth in energy consumption than previously anticipated. The only caveat is, yet again, the multiplicative factor, as AI penetration increases and the technology becomes more pervasive.
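The multiplicative-factor argument can be illustrated with a back-of-the-envelope calculation. All figures below are hypothetical assumptions chosen for illustration, not measurements from the study:

```python
# Illustrative sketch: why inference energy can dominate training energy.
# Every number here is a hypothetical assumption, not data from the paper.

training_energy_kwh = 1_000.0       # assumed one-off training cost
energy_per_query_kwh = 0.0001       # assumed per-inference cost
queries_per_day = 1_000_000         # assumed deployment scale
deployment_days = 365               # assumed service lifetime considered

total_inference_kwh = energy_per_query_kwh * queries_per_day * deployment_days
print(total_inference_kwh)                          # 36500.0 kWh
print(total_inference_kwh / training_energy_kwh)    # 36.5x the training cost
```

Even with a modest per-query cost, the sheer volume of queries makes deployment-time energy the dominant term, which is why the abstract centres the analysis on inference rather than training.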