Text-to-SQL systems powered by Large Language Models (LLMs) achieve high accuracy on standard benchmarks, yet existing efficiency metrics such as the Valid Efficiency Score (VES) measure execution time rather than the consumption-based costs of cloud data warehouses. This paper presents the first systematic evaluation of cloud compute costs for LLM-generated SQL queries. We evaluate six state-of-the-art LLMs across 180 query executions on Google BigQuery using the StackOverflow dataset (230GB), measuring bytes processed, slot utilization, and estimated cost. Our analysis yields three key findings: (1) reasoning models process 44.5% fewer bytes than standard models while maintaining equivalent correctness (96.7%-100%); (2) execution time correlates weakly with query cost (r=0.16), indicating that speed optimization does not imply cost optimization; and (3) models exhibit up to 3.4x cost variance, with standard models producing outliers exceeding 36GB per query. We identify prevalent inefficiency patterns including missing partition filters and unnecessary full-table scans, and provide deployment guidelines for cost-sensitive enterprise environments.
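The weak correlation between execution time and cost follows from BigQuery's on-demand billing, which charges per byte scanned rather than per second of compute. A minimal sketch of this cost model, assuming an illustrative on-demand rate (the actual per-TiB price is not stated in the paper and should be checked against current BigQuery pricing):

```python
# Sketch of BigQuery's on-demand cost model: cost scales with bytes
# scanned, not with execution time or slot usage.
# PRICE_PER_TIB_USD is an assumed illustrative rate, not a figure
# from the paper -- verify against current BigQuery pricing.

PRICE_PER_TIB_USD = 6.25
TIB = 1024 ** 4  # bytes in one tebibyte


def estimated_cost_usd(bytes_processed: int) -> float:
    """Estimate on-demand query cost from bytes processed."""
    return bytes_processed / TIB * PRICE_PER_TIB_USD


# Example: an unfiltered 36 GB full-table scan (the outlier scale
# reported above) vs. a hypothetical 1 GB partition-pruned query.
full_scan_cost = estimated_cost_usd(36 * 1024**3)
pruned_cost = estimated_cost_usd(1 * 1024**3)
```

Under this model, two queries with identical results and similar runtimes can differ in cost by the ratio of bytes they scan, which is why missing partition filters dominate the inefficiency patterns we observe.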