Model fine-tuning and adaptation have become a common approach for specializing models to downstream tasks or domains. Fine-tuning the entire model or a subset of the parameters using lightweight adaptation has shown considerable success across different specialization tasks. However, fine-tuning a model for a large number of domains typically requires starting a new training job for every domain, which limits scalability. Once trained, deploying these models for real-time inference also poses significant scalability challenges. In this paper, building upon prior lightweight adaptation techniques, we propose a modular framework that substantially improves scalability for both model training and inference. We introduce Submodels that can be quickly and dynamically loaded for on-the-fly inference. We also propose multiple approaches for training these Submodels in parallel within a single training job using an embedding space. We evaluate our framework on an extreme use case, speech model personalization for atypical speech, which requires a Submodel for each user. We obtain a 128x increase in Submodel throughput with a fixed computation budget and no loss of accuracy. We also show that learning a speaker-embedding space can scale further and reduce the amount of personalization training data required per speaker.
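To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of how per-speaker Submodels could be realized as small residual adapters whose weights are generated from a learned speaker-embedding space, so that many Submodels train in parallel in one batched job and a single speaker's Submodel can be materialized on the fly at inference time. All names (HyperAdapter, D_ADAPTER, etc.) and dimensions are illustrative assumptions.

```python
# Hypothetical sketch (PyTorch): per-speaker Submodels as bottleneck adapters
# generated from a speaker embedding. Every utterance in a batch uses the
# Submodel of its own speaker, so many Submodels are trained in parallel.
import torch
import torch.nn as nn

D_MODEL, D_ADAPTER, N_SPEAKERS, D_EMB = 256, 32, 128, 16  # illustrative sizes

class HyperAdapter(nn.Module):
    """Maps a speaker embedding to the weights of a small residual adapter."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_SPEAKERS, D_EMB)            # speaker-embedding space
        self.to_down = nn.Linear(D_EMB, D_MODEL * D_ADAPTER)  # generates W_down per speaker
        self.to_up = nn.Linear(D_EMB, D_ADAPTER * D_MODEL)    # generates W_up per speaker

    def forward(self, h, speaker_ids):
        # h: (batch, time, D_MODEL) hidden states; speaker_ids: (batch,)
        e = self.emb(speaker_ids)
        w_down = self.to_down(e).view(-1, D_MODEL, D_ADAPTER)
        w_up = self.to_up(e).view(-1, D_ADAPTER, D_MODEL)
        z = torch.relu(torch.bmm(h, w_down))   # batched per-speaker bottleneck
        return h + torch.bmm(z, w_up)          # residual keeps the frozen base model intact

base = nn.Sequential(nn.Linear(80, D_MODEL), nn.ReLU())  # stand-in for a frozen speech encoder
adapter = HyperAdapter()

feats = torch.randn(4, 50, 80)                 # (batch, time, filterbank features)
speakers = torch.randint(0, N_SPEAKERS, (4,))  # one Submodel index per utterance
out = adapter(base(feats), speakers)           # (4, 50, D_MODEL)
print(out.shape)
```

At inference, only the embedding row (or the generated adapter weights) for the requesting speaker needs to be loaded alongside the shared base model, which is what makes on-the-fly, per-user serving tractable under this kind of design.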