The sequence length along the time axis is often the dominant factor in the computational cost of self-supervised speech models. Prior work has proposed reducing the sequence length to lower this cost. However, different downstream tasks have different tolerances for sequence compression, so a model that produces a fixed compression rate may not fit all tasks. In this work, we introduce a once-for-all (OFA) sequence compression framework for self-supervised speech models that supports a continuous range of compression rates. The framework is evaluated on various tasks, showing marginal degradation compared to fixed-compression-rate variants, along with a smooth performance-efficiency trade-off. We further explore adaptive compression rate learning, demonstrating the ability to select a task-specific preferred frame period without requiring a grid search.
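As a rough illustration (not the paper's actual method), the sketch below shows one simple way a single set of model weights could serve multiple compression rates: upstream frame features are average-pooled along time with a stride chosen at inference, so the frame period, and hence the downstream sequence length, becomes a runtime knob. The function name, the 20 ms base frame period, and the pooling choice are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def compress_frames(features: torch.Tensor, frame_period_ms: float,
                    base_period_ms: float = 20.0) -> torch.Tensor:
    """Pool a (batch, time, dim) feature sequence to a coarser frame period.

    Illustrative only: average pooling with a runtime-selected stride stands in
    for whatever length-reduction module the model actually uses.
    """
    stride = max(1, round(frame_period_ms / base_period_ms))
    # (batch, time, dim) -> (batch, dim, time) so pooling runs along the time axis.
    x = features.transpose(1, 2)
    x = F.avg_pool1d(x, kernel_size=stride, stride=stride, ceil_mode=True)
    return x.transpose(1, 2)

# Example: 20 ms upstream features pooled to a 40 ms frame period,
# halving the sequence length (and roughly the self-attention cost).
feats = torch.randn(4, 500, 768)                     # 10 s of 20 ms frames
compressed = compress_frames(feats, frame_period_ms=40.0)
print(compressed.shape)                              # torch.Size([4, 250, 768])
```

Because the stride is an inference-time argument rather than a training-time constant, each downstream task can pick the frame period that matches its own tolerance for compression, which is the trade-off the abstract describes.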