Despite rapid progress, ASR evaluation remains dominated by short-form English, and efficiency is rarely reported. We present the Open ASR Leaderboard, a fully reproducible benchmark and interactive leaderboard comparing 60+ open-source and proprietary systems across 11 datasets, including a dedicated multilingual track. We standardize text normalization and report both word error rate (WER) and inverse real-time factor (RTFx), enabling fair accuracy-efficiency comparisons. For English transcription, Conformer encoders paired with LLM decoders achieve the best average WER but are slower, while CTC and TDT decoders deliver much better RTFx, making them attractive for long-form and offline use. Whisper-derived encoders fine-tuned for English improve accuracy but often trade off multilingual coverage. All code and dataset loaders are open-sourced to support transparent, extensible evaluation.
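The two metrics above can be made concrete with a short sketch. WER is the word-level Levenshtein edit distance between reference and hypothesis, divided by the number of reference words; RTFx is audio duration divided by wall-clock decoding time, so higher means faster. The function names below are illustrative, not the leaderboard's actual API.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / # reference words,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def rtfx(audio_seconds: float, processing_seconds: float) -> float:
    """Inverse real-time factor: audio duration / decoding wall-clock time.
    RTFx = 10 means the system transcribes 10x faster than real time."""
    return audio_seconds / processing_seconds
```

Note that WER is only comparable across systems after both texts pass through the same normalizer (casing, punctuation, number formatting), which is why the leaderboard standardizes normalization before scoring.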