This paper derives `Scaling Laws for Economic Impacts' -- empirical relationships between the training compute of Large Language Models (LLMs) and professional productivity. In a preregistered experiment, over 500 consultants, data analysts, and managers completed professional tasks using one of 13 LLMs. We find that each additional year of AI model progress reduced task completion time by 8%, with 56% of the gains attributable to increased training compute and 44% to algorithmic progress. However, productivity gains were significantly larger for non-agentic analytical tasks than for agentic workflows requiring tool use. These findings suggest that continued model scaling could boost U.S. productivity by approximately 20% over the next decade.