We propose TRACE (Time Series Parameter-Efficient Fine-Tuning), an efficient fine-tuning method for time series foundation models. While pretrained time series foundation models are gaining popularity, they face the following challenges: (1) Unlike natural language tasks, time series data vary in frequency, number of channels, and historical/prediction lengths; for long-term forecasting tasks in particular, tailored fine-tuning can significantly enhance performance. (2) Existing parameter-efficient tuning methods such as LoRA remain applicable but require adaptation to temporal characteristics. To address these challenges, our TRACE framework introduces two key innovations: (1) Gated DSIC (Gated Dynamic Simulation Importance Calculation), an unbiased importance-selection mechanism for LoRA modules that ensures conditional parameter consistency before and after masking; experiments demonstrate that Gated DSIC outperforms common fine-tuning methods. (2) Reconstructed prediction heads for long-term forecasting tasks, which achieve comparable or superior performance to linear probing heads while drastically reducing parameter counts. Extensive experiments on long-/short-term forecasting, anomaly detection, and natural language tasks across diverse datasets, together with ablation studies, validate the effectiveness of our method.
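To make the Gated DSIC idea concrete, here is a minimal PyTorch sketch, assuming a mechanism along the lines the abstract describes: each LoRA module sits behind a binary gate held in a buffer, so masking a module simulates its removal without altering any weights (the "conditional parameter consistency" property), and importance is scored as the average loss increase from masking a module across randomly sampled gate configurations. The names (`GatedLoRALinear`, `dsic_importance`) and the sampling scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Frozen linear layer with a LoRA branch behind a binary gate.

    The gate is a buffer, not a parameter: flipping it to 0 simulates
    removing this LoRA module while every weight stays identical, so
    masked and unmasked passes are compared on the same parameters.
    """
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # frozen pretrained weights
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank
        self.register_buffer("gate", torch.tensor(1.0))

    def forward(self, x):
        delta = (x @ self.lora_A.T) @ self.lora_B.T * self.scaling
        return self.base(x) + self.gate * delta

@torch.no_grad()
def dsic_importance(model, batch, loss_fn, n_rounds=8, keep_prob=0.5):
    """Score each gated LoRA module by the average loss increase when
    its gate is masked, measured under randomly sampled gate settings
    for the other modules ("dynamic simulation"). Weights never change."""
    modules = [m for m in model.modules() if isinstance(m, GatedLoRALinear)]
    x, y = batch
    scores = torch.zeros(len(modules))
    for _ in range(n_rounds):
        # Sample a random on/off context for all gates this round.
        context = (torch.rand(len(modules)) < keep_prob).float()
        for m, g in zip(modules, context):
            m.gate.fill_(g.item())
        for i, m in enumerate(modules):
            m.gate.fill_(1.0)
            loss_on = loss_fn(model(x), y)
            m.gate.fill_(0.0)
            loss_off = loss_fn(model(x), y)
            scores[i] += (loss_off - loss_on).item()
            m.gate.fill_(context[i].item())  # restore sampled context
    for m in modules:
        m.gate.fill_(1.0)  # re-enable all modules after scoring
    return scores / n_rounds
```

Low-scoring modules could then be pruned or left frozen, concentrating the trainable-parameter budget on the modules whose removal hurts the loss most.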
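The abstract does not specify how the reconstructed prediction head is built, so the sketch below uses a generic low-rank factorized head purely as a stand-in to illustrate the parameter-count argument: a linear probing head over flattened patch embeddings scales with n_patches * d_model * pred_len, while a bottlenecked head scales with roughly (n_patches * d_model + pred_len) * r for a small rank r.

```python
import torch
import torch.nn as nn

class LinearProbingHead(nn.Module):
    """Standard linear probe: flatten all patch embeddings and map
    to the forecast horizon with one dense layer."""
    def __init__(self, n_patches, d_model, pred_len):
        super().__init__()
        self.proj = nn.Linear(n_patches * d_model, pred_len)

    def forward(self, z):               # z: (batch, n_patches, d_model)
        return self.proj(z.flatten(1))  # (batch, pred_len)

class LowRankHead(nn.Module):
    """Hypothetical low-rank head (a stand-in, not TRACE's actual
    reconstructed head): project to a small bottleneck first."""
    def __init__(self, n_patches, d_model, pred_len, r=32):
        super().__init__()
        self.down = nn.Linear(n_patches * d_model, r)
        self.up = nn.Linear(r, pred_len)

    def forward(self, z):
        return self.up(self.down(z.flatten(1)))

# Parameter comparison for a typical long-horizon configuration.
n_patches, d_model, pred_len = 64, 768, 720
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(LinearProbingHead(n_patches, d_model, pred_len)))  # ~35.4M
print(count(LowRankHead(n_patches, d_model, pred_len)))        # ~1.6M
```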
