Algorithm research focuses primarily on how many operations processors need to perform (time complexity). But for many problems, both runtime and energy use are dominated by memory accesses. In this paper, we present the first broad survey of how algorithmic progress has improved memory usage (space complexity). We analyze 118 of the most important algorithm problems in computer science, reviewing the 800+ algorithms used to solve them. Our results show that space complexity has become much more important in recent years, as concerns have grown about memory access bottlenecking performance (the ``memory wall''). In 20% of cases we find that space complexity improvements for large problems (n = 1 billion) outpaced improvements in DRAM access speed, suggesting that for these problems algorithmic progress played a larger role than hardware progress in minimizing memory access delays. Increasingly, we also see the emergence of algorithmic Pareto frontiers, where achieving better asymptotic time complexity for a problem requires accepting worse asymptotic space complexity, and vice versa. This tension implies that programmers will increasingly need to weigh multiple algorithmic options to determine which is best for their particular problem. To help theorists and practitioners alike consider these trade-offs, we have created a reference for them at https://algorithm-wiki.csail.mit.edu.