We investigate algorithmic progress in image classification on ImageNet, perhaps the most well-known test bed for computer vision. We estimate a model, informed by work on neural scaling laws, and infer a decomposition of progress into the scaling of compute, data, and algorithms. Using Shapley values to attribute performance improvements, we find that algorithmic improvements have been roughly as important as the scaling of compute for progress in computer vision. Our estimates indicate that algorithmic innovations mostly take the form of compute-augmenting algorithmic advances (which enable researchers to get better performance from less compute), not data-augmenting algorithmic advances. We find that compute-augmenting algorithmic advances are made at a pace more than twice as fast as the rate usually associated with Moore's law. In particular, we estimate that compute-augmenting innovations halve compute requirements every nine months (95\% confidence interval: 4 to 25 months).
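The comparison with Moore's law can be made concrete with a short arithmetic sketch. The 9-month halving time is the point estimate above; the 24-month Moore's-law doubling time used for comparison is a conventional assumption, not a figure from this work:

```python
# Illustrative arithmetic only: annualize a 9-month compute-halving time
# for algorithmic progress and compare it with a Moore's-law-style
# 24-month doubling time (an assumed, conventional value).
halving_months = 9.0           # point estimate: compute for fixed performance halves every 9 months
moores_doubling_months = 24.0  # assumed Moore's-law doubling time

# Effective compute multiplier per year from each source.
algorithmic_gain_per_year = 2 ** (12.0 / halving_months)       # 2^(12/9)
hardware_gain_per_year = 2 ** (12.0 / moores_doubling_months)  # 2^(12/24)

print(f"Algorithmic progress: {algorithmic_gain_per_year:.2f}x per year")
print(f"Moore's law:          {hardware_gain_per_year:.2f}x per year")
print(f"Pace ratio (halving vs doubling time): {moores_doubling_months / halving_months:.1f}x")
```

A 9-month halving time corresponds to roughly a 2.5x effective compute gain per year, against about 1.4x per year from a 24-month doubling time, consistent with the "more than twice as fast" characterization.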