The progress of some AI paradigms, such as deep learning, is said to be linked to an exponential growth in the number of parameters. Many studies corroborate these trends, but does this translate into an exponential increase in energy consumption? To answer this question we focus on inference costs rather than training costs, as the former account for most of the computing effort, simply because of the multiplicative factor: a model is trained once but queried many times. Also, apart from algorithmic innovations, we account for more specialised and powerful hardware (leading to higher FLOPS), which is usually accompanied by important energy-efficiency optimisations. We also shift the focus from the first implementation of a breakthrough paper to the consolidated version of the techniques one or two years later. Under this distinctive and comprehensive perspective, we study relevant models in the areas of computer vision and natural language processing: for a sustained increase in performance we see a much softer growth in energy consumption than previously anticipated. The only caveat is, yet again, the multiplicative factor, as AI increases its penetration and becomes more pervasive.