Automatic Speech Recognition (ASR) models have achieved remarkable accuracy in general settings, yet their performance often degrades in domain-specific applications due to data mismatch and linguistic variability. This challenge is amplified for modern Large Language Model (LLM)-based ASR systems, whose massive scale and complex training dynamics make effective fine-tuning non-trivial. To address this gap, this paper proposes a principled, metric-driven fine-tuning framework for adapting both traditional and LLM-based ASR models to specialized domains. The framework centers on learning rate optimization guided by performance metrics, combined with domain-specific data transformation and augmentation. We empirically evaluate the framework on state-of-the-art models, including Whisper, Whisper-Turbo, and Qwen2-Audio, using datasets spanning multiple domains, languages, and audio lengths. Our results not only validate the proposed framework but also establish practical protocols for improving domain-specific ASR performance while preventing overfitting.
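To make the metric-driven idea concrete, the sketch below shows one common way such a scheme can be realized: the learning rate is reduced whenever validation word error rate (WER) stops improving. This is a minimal illustration under our own assumptions, not the paper's exact protocol; the stand-in model and the `validation_wer` placeholder are hypothetical, and in practice the model would be Whisper or Qwen2-Audio fine-tuned on domain-specific audio-text pairs.

```python
# Minimal sketch (an assumption, not the paper's method): metric-driven
# learning-rate scheduling that lowers the LR when validation WER plateaus.
import torch
from torch import nn

# Stand-in model; in practice this would be Whisper / Qwen2-Audio.
model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# Halve the learning rate after 2 epochs without WER improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2
)

def validation_wer(epoch: int) -> float:
    """Placeholder: decode the validation set and score WER
    (e.g. with jiwer.wer); here it just simulates a plateau."""
    return max(0.25, 0.40 - 0.03 * epoch)

best_wer = float("inf")
for epoch in range(10):
    # ... one epoch of domain-specific fine-tuning would go here ...
    wer = validation_wer(epoch)
    scheduler.step(wer)  # LR drops once WER stops improving
    if wer < best_wer:
        best_wer = wer   # checkpoint the best model in practice
    print(f"epoch {epoch}: WER={wer:.3f}, "
          f"lr={optimizer.param_groups[0]['lr']:.2e}")
```

The design choice being illustrated is that the adaptation signal comes from the task metric (WER) rather than the training loss, which is one way to keep fine-tuning responsive to domain performance while guarding against overfitting.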